00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 3468 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3079 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.102 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.103 The recommended git tool is: git 00:00:00.103 using credential 00000000-0000-0000-0000-000000000002 00:00:00.105 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.130 Fetching changes from the remote Git repository 00:00:00.132 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.156 Using shallow fetch with depth 1 00:00:00.156 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.156 > git --version # timeout=10 00:00:00.172 > git --version # 'git version 2.39.2' 00:00:00.172 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.172 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.173 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.149 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.159 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.170 Checking out Revision 71481c63295b6b9f0ecef6c6e69e033a6109160a (FETCH_HEAD) 00:00:06.170 > git config core.sparsecheckout # timeout=10 00:00:06.179 > git read-tree -mu HEAD # timeout=10 00:00:06.194 > git checkout -f 71481c63295b6b9f0ecef6c6e69e033a6109160a # timeout=5 00:00:06.210 Commit message: "jenkins/jjb-config: Disable bsc job until further notice" 00:00:06.210 > git rev-list --no-walk 71481c63295b6b9f0ecef6c6e69e033a6109160a # timeout=10 00:00:06.291 [Pipeline] Start of Pipeline 00:00:06.303 [Pipeline] library 00:00:06.305 Loading library shm_lib@master 00:00:06.305 Library shm_lib@master is cached. Copying from home. 00:00:06.319 [Pipeline] node 00:00:06.326 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.328 [Pipeline] { 00:00:06.338 [Pipeline] catchError 00:00:06.339 [Pipeline] { 00:00:06.350 [Pipeline] wrap 00:00:06.358 [Pipeline] { 00:00:06.363 [Pipeline] stage 00:00:06.364 [Pipeline] { (Prologue) 00:00:06.556 [Pipeline] sh 00:00:06.838 + logger -p user.info -t JENKINS-CI 00:00:06.859 [Pipeline] echo 00:00:06.860 Node: GP11 00:00:06.869 [Pipeline] sh 00:00:07.173 [Pipeline] setCustomBuildProperty 00:00:07.186 [Pipeline] echo 00:00:07.188 Cleanup processes 00:00:07.194 [Pipeline] sh 00:00:07.479 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.479 114438 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.493 [Pipeline] sh 00:00:07.776 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.776 ++ grep -v 'sudo pgrep' 00:00:07.776 ++ awk '{print $1}' 00:00:07.776 + sudo kill -9 00:00:07.776 + true 00:00:07.790 [Pipeline] cleanWs 00:00:07.800 [WS-CLEANUP] Deleting project workspace... 00:00:07.800 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.806 [WS-CLEANUP] done 00:00:07.811 [Pipeline] setCustomBuildProperty 00:00:07.826 [Pipeline] sh 00:00:08.108 + sudo git config --global --replace-all safe.directory '*' 00:00:08.191 [Pipeline] nodesByLabel 00:00:08.225 Found a total of 1 nodes with the 'sorcerer' label 00:00:08.236 [Pipeline] httpRequest 00:00:08.241 HttpMethod: GET 00:00:08.242 URL: http://10.211.164.101/packages/jbp_71481c63295b6b9f0ecef6c6e69e033a6109160a.tar.gz 00:00:08.246 Sending request to url: http://10.211.164.101/packages/jbp_71481c63295b6b9f0ecef6c6e69e033a6109160a.tar.gz 00:00:08.257 Response Code: HTTP/1.1 200 OK 00:00:08.258 Success: Status code 200 is in the accepted range: 200,404 00:00:08.258 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_71481c63295b6b9f0ecef6c6e69e033a6109160a.tar.gz 00:00:09.811 [Pipeline] sh 00:00:10.096 + tar --no-same-owner -xf jbp_71481c63295b6b9f0ecef6c6e69e033a6109160a.tar.gz 00:00:10.116 [Pipeline] httpRequest 00:00:10.121 HttpMethod: GET 00:00:10.121 URL: http://10.211.164.101/packages/spdk_dafdb289f5521a85d804cfd0a1254835d3b4ef10.tar.gz 00:00:10.122 Sending request to url: http://10.211.164.101/packages/spdk_dafdb289f5521a85d804cfd0a1254835d3b4ef10.tar.gz 00:00:10.133 Response Code: HTTP/1.1 200 OK 00:00:10.133 Success: Status code 200 is in the accepted range: 200,404 00:00:10.134 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_dafdb289f5521a85d804cfd0a1254835d3b4ef10.tar.gz 00:01:23.925 [Pipeline] sh 00:01:24.213 + tar --no-same-owner -xf spdk_dafdb289f5521a85d804cfd0a1254835d3b4ef10.tar.gz 00:01:27.531 [Pipeline] sh 00:01:27.817 + git -C spdk log --oneline -n5 00:01:27.817 dafdb289f raid: allow re-adding a base bdev with superblock 00:01:27.817 b694ff865 raid: add callback to raid_bdev_examine_sb() 00:01:27.817 30c08caa3 test/raid: always create pt bdevs in rebuild test 00:01:27.817 e2f90f3c7 test/raid: remove unnecessary recreating of base bdevs 00:01:27.817 bad11eeac raid: keep raid bdev in CONFIGURING state when last base bdev is removed 00:01:27.837 [Pipeline] withCredentials 00:01:27.849 > git --version # timeout=10 00:01:27.861 > git --version # 'git version 2.39.2' 00:01:27.883 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:27.885 [Pipeline] { 00:01:27.896 [Pipeline] retry 00:01:27.898 [Pipeline] { 00:01:27.918 [Pipeline] sh 00:01:28.208 + git ls-remote http://dpdk.org/git/dpdk main 00:01:28.221 [Pipeline] } 00:01:28.243 [Pipeline] // retry 00:01:28.247 [Pipeline] } 00:01:28.269 [Pipeline] // withCredentials 00:01:28.281 [Pipeline] httpRequest 00:01:28.286 HttpMethod: GET 00:01:28.287 URL: http://10.211.164.101/packages/dpdk_7e06c0de1952d3109a5b0c4779d7e7d8059c9d78.tar.gz 00:01:28.289 Sending request to url: http://10.211.164.101/packages/dpdk_7e06c0de1952d3109a5b0c4779d7e7d8059c9d78.tar.gz 00:01:28.293 Response Code: HTTP/1.1 200 OK 00:01:28.294 Success: Status code 200 is in the accepted range: 200,404 00:01:28.294 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_7e06c0de1952d3109a5b0c4779d7e7d8059c9d78.tar.gz 00:01:33.466 [Pipeline] sh 00:01:33.745 + tar --no-same-owner -xf dpdk_7e06c0de1952d3109a5b0c4779d7e7d8059c9d78.tar.gz 00:01:35.142 [Pipeline] sh 00:01:35.428 + git -C dpdk log --oneline -n5 00:01:35.428 7e06c0de19 examples: move alignment attribute on types for MSVC 00:01:35.428 27595cd830 drivers: move alignment attribute on types for MSVC 00:01:35.428 0efea35a2b app: move alignment attribute on types for MSVC 00:01:35.428 e2e546ab5b version: 
24.07-rc0 00:01:35.428 a9778aad62 version: 24.03.0 00:01:35.440 [Pipeline] } 00:01:35.456 [Pipeline] // stage 00:01:35.465 [Pipeline] stage 00:01:35.467 [Pipeline] { (Prepare) 00:01:35.487 [Pipeline] writeFile 00:01:35.503 [Pipeline] sh 00:01:35.788 + logger -p user.info -t JENKINS-CI 00:01:35.802 [Pipeline] sh 00:01:36.087 + logger -p user.info -t JENKINS-CI 00:01:36.100 [Pipeline] sh 00:01:36.385 + cat autorun-spdk.conf 00:01:36.385 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:36.385 SPDK_TEST_NVMF=1 00:01:36.385 SPDK_TEST_NVME_CLI=1 00:01:36.385 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:36.385 SPDK_TEST_NVMF_NICS=e810 00:01:36.385 SPDK_TEST_VFIOUSER=1 00:01:36.385 SPDK_RUN_UBSAN=1 00:01:36.385 NET_TYPE=phy 00:01:36.385 SPDK_TEST_NATIVE_DPDK=main 00:01:36.385 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:36.393 RUN_NIGHTLY=1 00:01:36.397 [Pipeline] readFile 00:01:36.421 [Pipeline] withEnv 00:01:36.423 [Pipeline] { 00:01:36.437 [Pipeline] sh 00:01:36.723 + set -ex 00:01:36.723 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:36.723 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:36.723 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:36.723 ++ SPDK_TEST_NVMF=1 00:01:36.723 ++ SPDK_TEST_NVME_CLI=1 00:01:36.723 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:36.724 ++ SPDK_TEST_NVMF_NICS=e810 00:01:36.724 ++ SPDK_TEST_VFIOUSER=1 00:01:36.724 ++ SPDK_RUN_UBSAN=1 00:01:36.724 ++ NET_TYPE=phy 00:01:36.724 ++ SPDK_TEST_NATIVE_DPDK=main 00:01:36.724 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:36.724 ++ RUN_NIGHTLY=1 00:01:36.724 + case $SPDK_TEST_NVMF_NICS in 00:01:36.724 + DRIVERS=ice 00:01:36.724 + [[ tcp == \r\d\m\a ]] 00:01:36.724 + [[ -n ice ]] 00:01:36.724 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:36.724 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:36.724 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:36.724 rmmod: ERROR: Module irdma is not currently loaded 00:01:36.724 rmmod: ERROR: Module i40iw is not currently loaded 00:01:36.724 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:36.724 + true 00:01:36.724 + for D in $DRIVERS 00:01:36.724 + sudo modprobe ice 00:01:36.724 + exit 0 00:01:36.734 [Pipeline] } 00:01:36.752 [Pipeline] // withEnv 00:01:36.758 [Pipeline] } 00:01:36.774 [Pipeline] // stage 00:01:36.784 [Pipeline] catchError 00:01:36.785 [Pipeline] { 00:01:36.800 [Pipeline] timeout 00:01:36.801 Timeout set to expire in 40 min 00:01:36.802 [Pipeline] { 00:01:36.819 [Pipeline] stage 00:01:36.821 [Pipeline] { (Tests) 00:01:36.837 [Pipeline] sh 00:01:37.121 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:37.121 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:37.121 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:37.121 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:37.122 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:37.122 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:37.122 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:37.122 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:37.122 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:37.122 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:37.122 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:37.122 + source /etc/os-release 00:01:37.122 ++ NAME='Fedora Linux' 00:01:37.122 ++ VERSION='38 (Cloud Edition)' 00:01:37.122 ++ ID=fedora 00:01:37.122 ++ VERSION_ID=38 00:01:37.122 ++ VERSION_CODENAME= 00:01:37.122 ++ PLATFORM_ID=platform:f38 00:01:37.122 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:37.122 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:37.122 ++ LOGO=fedora-logo-icon 00:01:37.122 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:37.122 ++ HOME_URL=https://fedoraproject.org/ 00:01:37.122 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:37.122 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:37.122 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:37.122 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:37.122 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:37.122 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:37.122 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:37.122 ++ SUPPORT_END=2024-05-14 00:01:37.122 ++ VARIANT='Cloud Edition' 00:01:37.122 ++ VARIANT_ID=cloud 00:01:37.122 + uname -a 00:01:37.122 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:37.122 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:38.064 Hugepages 00:01:38.064 node hugesize free / total 00:01:38.064 node0 1048576kB 0 / 0 00:01:38.064 node0 2048kB 0 / 0 00:01:38.064 node1 1048576kB 0 / 0 00:01:38.064 node1 2048kB 0 / 0 00:01:38.064 00:01:38.064 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:38.064 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:38.064 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:38.064 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:38.064 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:38.064 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:38.064 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:38.064 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:38.064 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:38.064 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:38.064 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:38.064 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:38.064 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:38.064 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:38.064 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:38.064 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:38.064 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:38.064 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:38.064 + rm -f /tmp/spdk-ld-path 00:01:38.064 + source autorun-spdk.conf 00:01:38.064 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:38.064 ++ SPDK_TEST_NVMF=1 00:01:38.064 ++ SPDK_TEST_NVME_CLI=1 00:01:38.064 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:38.064 ++ SPDK_TEST_NVMF_NICS=e810 00:01:38.064 ++ SPDK_TEST_VFIOUSER=1 00:01:38.064 ++ SPDK_RUN_UBSAN=1 00:01:38.064 ++ NET_TYPE=phy 00:01:38.064 ++ SPDK_TEST_NATIVE_DPDK=main 00:01:38.064 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:38.064 ++ RUN_NIGHTLY=1 00:01:38.064 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:38.064 + [[ -n '' ]] 00:01:38.064 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
00:01:38.064 + for M in /var/spdk/build-*-manifest.txt 00:01:38.064 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:38.064 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:38.064 + for M in /var/spdk/build-*-manifest.txt 00:01:38.064 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:38.064 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:38.064 ++ uname 00:01:38.064 + [[ Linux == \L\i\n\u\x ]] 00:01:38.064 + sudo dmesg -T 00:01:38.064 + sudo dmesg --clear 00:01:38.324 + dmesg_pid=115757 00:01:38.324 + [[ Fedora Linux == FreeBSD ]] 00:01:38.324 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:38.324 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:38.324 + sudo dmesg -Tw 00:01:38.324 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:38.324 + [[ -x /usr/src/fio-static/fio ]] 00:01:38.324 + export FIO_BIN=/usr/src/fio-static/fio 00:01:38.324 + FIO_BIN=/usr/src/fio-static/fio 00:01:38.324 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:38.324 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:38.324 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:38.324 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:38.324 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:38.324 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:38.324 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:38.324 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:38.324 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:38.324 Test configuration: 00:01:38.324 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:38.324 SPDK_TEST_NVMF=1 00:01:38.324 SPDK_TEST_NVME_CLI=1 00:01:38.324 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:38.324 SPDK_TEST_NVMF_NICS=e810 00:01:38.324 SPDK_TEST_VFIOUSER=1 00:01:38.324 SPDK_RUN_UBSAN=1 00:01:38.324 NET_TYPE=phy 00:01:38.324 SPDK_TEST_NATIVE_DPDK=main 00:01:38.324 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:38.324 RUN_NIGHTLY=1 02:42:28 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:38.324 02:42:28 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:38.324 02:42:28 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:38.324 02:42:28 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:38.324 02:42:28 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:38.324 02:42:28 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:38.324 02:42:28 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:38.324 02:42:28 -- paths/export.sh@5 -- $ export PATH 00:01:38.324 02:42:28 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:38.324 02:42:28 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:38.324 02:42:28 -- common/autobuild_common.sh@437 -- $ date +%s 00:01:38.324 02:42:28 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715560948.XXXXXX 00:01:38.324 02:42:28 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715560948.Om9cif 00:01:38.324 02:42:28 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:01:38.324 02:42:28 -- common/autobuild_common.sh@443 -- $ '[' -n main ']' 00:01:38.324 02:42:28 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:38.324 02:42:28 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:38.324 02:42:28 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:38.324 02:42:28 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:38.324 02:42:28 -- common/autobuild_common.sh@453 -- $ get_config_params 00:01:38.324 02:42:28 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:01:38.324 02:42:28 -- common/autotest_common.sh@10 -- $ set +x 00:01:38.324 02:42:28 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:38.324 02:42:28 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:01:38.324 02:42:28 -- pm/common@17 -- $ local monitor 00:01:38.324 02:42:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:38.324 02:42:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:38.324 02:42:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:38.324 02:42:28 -- pm/common@21 -- $ date +%s 00:01:38.324 02:42:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:38.324 02:42:28 -- pm/common@21 -- $ date +%s 00:01:38.324 02:42:28 -- pm/common@25 -- $ sleep 1 00:01:38.324 02:42:28 -- pm/common@21 -- $ date +%s 00:01:38.324 02:42:28 -- pm/common@21 -- $ date +%s 00:01:38.324 02:42:28 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715560948 00:01:38.324 02:42:28 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715560948 00:01:38.324 02:42:28 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715560948 00:01:38.324 02:42:28 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715560948 00:01:38.324 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715560948_collect-vmstat.pm.log 00:01:38.324 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715560948_collect-cpu-load.pm.log 00:01:38.324 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715560948_collect-cpu-temp.pm.log 00:01:38.324 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715560948_collect-bmc-pm.bmc.pm.log 00:01:39.264 02:42:29 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:01:39.264 02:42:29 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:39.264 02:42:29 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:39.264 02:42:29 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:39.264 02:42:29 -- spdk/autobuild.sh@16 -- $ date -u 00:01:39.264 Mon May 13 12:42:29 AM UTC 2024 00:01:39.264 02:42:29 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:39.264 v24.05-pre-583-gdafdb289f 00:01:39.264 02:42:29 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:39.264 02:42:29 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:39.264 02:42:29 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:39.264 02:42:29 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:01:39.264 02:42:29 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:39.264 02:42:29 -- common/autotest_common.sh@10 -- $ set +x 00:01:39.264 ************************************ 00:01:39.264 START TEST ubsan 00:01:39.264 ************************************ 00:01:39.264 02:42:30 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:01:39.264 using ubsan 00:01:39.264 00:01:39.264 real 0m0.000s 00:01:39.264 user 0m0.000s 00:01:39.264 sys 0m0.000s 00:01:39.264 02:42:30 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:01:39.264 02:42:30 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:39.264 ************************************ 00:01:39.264 END TEST ubsan 00:01:39.264 ************************************ 00:01:39.264 02:42:30 -- spdk/autobuild.sh@27 -- $ '[' -n main ']' 00:01:39.264 02:42:30 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:39.264 02:42:30 -- common/autobuild_common.sh@429 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:39.264 02:42:30 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:01:39.264 02:42:30 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:39.264 02:42:30 -- common/autotest_common.sh@10 -- $ set +x 
00:01:39.264 ************************************ 00:01:39.264 START TEST build_native_dpdk 00:01:39.264 ************************************ 00:01:39.264 02:42:30 build_native_dpdk -- common/autotest_common.sh@1121 -- $ _build_native_dpdk 00:01:39.264 02:42:30 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:39.264 02:42:30 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:39.523 02:42:30 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:39.524 02:42:30 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:39.524 02:42:30 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:39.524 02:42:30 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:39.524 02:42:30 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:39.524 02:42:30 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:39.524 02:42:30 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:39.524 02:42:30 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:39.524 02:42:30 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:39.524 02:42:30 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:39.524 02:42:30 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:39.524 02:42:30 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:39.524 02:42:30 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:39.524 02:42:30 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:39.524 02:42:30 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:39.524 02:42:30 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:39.524 02:42:30 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:39.524 02:42:30 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:39.524 7e06c0de19 examples: move alignment attribute on types for MSVC 00:01:39.524 27595cd830 drivers: move alignment attribute on types for MSVC 00:01:39.524 0efea35a2b app: move alignment attribute on types for MSVC 00:01:39.524 e2e546ab5b version: 24.07-rc0 00:01:39.524 a9778aad62 version: 24.03.0 00:01:39.524 02:42:30 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:39.524 02:42:30 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:39.524 02:42:30 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.07.0-rc0 00:01:39.524 02:42:30 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:39.524 02:42:30 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:39.524 02:42:30 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:39.524 02:42:30 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:39.524 02:42:30 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:39.524 02:42:30 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:39.524 02:42:30 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:39.524 02:42:30 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:39.524 02:42:30 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:39.524 02:42:30 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:39.524 02:42:30 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:39.524 02:42:30 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:39.524 02:42:30 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:39.524 02:42:30 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:39.524 02:42:30 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 24.07.0-rc0 21.11.0 00:01:39.524 02:42:30 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 24.07.0-rc0 '<' 21.11.0 00:01:39.524 02:42:30 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:39.524 02:42:30 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:39.524 02:42:30 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:39.524 02:42:30 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:39.524 02:42:30 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:39.524 02:42:30 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:39.524 02:42:30 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:39.524 02:42:30 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=4 00:01:39.524 02:42:30 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:39.524 02:42:30 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:39.524 02:42:30 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 
00:01:39.524 02:42:30 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:39.524 02:42:30 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:39.524 02:42:30 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:39.524 02:42:30 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 24 00:01:39.524 02:42:30 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:01:39.524 02:42:30 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:39.524 02:42:30 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:01:39.524 02:42:30 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=24 00:01:39.524 02:42:30 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:01:39.524 02:42:30 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:01:39.524 02:42:30 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:39.524 02:42:30 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:01:39.524 02:42:30 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:39.524 02:42:30 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:39.524 02:42:30 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:01:39.524 02:42:30 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:39.524 patching file config/rte_config.h 00:01:39.524 Hunk #1 succeeded at 70 (offset 11 lines). 00:01:39.524 02:42:30 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:01:39.524 02:42:30 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:01:39.524 02:42:30 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:01:39.524 02:42:30 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:39.524 02:42:30 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:43.712 The Meson build system 00:01:43.712 Version: 1.3.1 00:01:43.712 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:43.712 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:43.712 Build type: native build 00:01:43.712 Program cat found: YES (/usr/bin/cat) 00:01:43.712 Project name: DPDK 00:01:43.712 Project version: 24.07.0-rc0 00:01:43.712 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:43.712 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:43.712 Host machine cpu family: x86_64 00:01:43.712 Host machine cpu: x86_64 00:01:43.712 Message: ## Building in Developer Mode ## 00:01:43.712 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:43.712 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:43.712 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:43.712 Program python3 found: YES (/usr/bin/python3) 00:01:43.712 Program cat found: YES (/usr/bin/cat) 00:01:43.712 config/meson.build:120: WARNING: The "machine" option is deprecated. 
Please use "cpu_instruction_set" instead. 00:01:43.712 Compiler for C supports arguments -march=native: YES 00:01:43.712 Checking for size of "void *" : 8 00:01:43.712 Checking for size of "void *" : 8 (cached) 00:01:43.712 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:43.712 Library m found: YES 00:01:43.712 Library numa found: YES 00:01:43.712 Has header "numaif.h" : YES 00:01:43.712 Library fdt found: NO 00:01:43.712 Library execinfo found: NO 00:01:43.712 Has header "execinfo.h" : YES 00:01:43.712 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:43.712 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:43.712 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:43.712 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:43.712 Run-time dependency openssl found: YES 3.0.9 00:01:43.712 Run-time dependency libpcap found: YES 1.10.4 00:01:43.712 Has header "pcap.h" with dependency libpcap: YES 00:01:43.712 Compiler for C supports arguments -Wcast-qual: YES 00:01:43.712 Compiler for C supports arguments -Wdeprecated: YES 00:01:43.712 Compiler for C supports arguments -Wformat: YES 00:01:43.712 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:43.712 Compiler for C supports arguments -Wformat-security: NO 00:01:43.713 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:43.713 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:43.713 Compiler for C supports arguments -Wnested-externs: YES 00:01:43.713 Compiler for C supports arguments -Wold-style-definition: YES 00:01:43.713 Compiler for C supports arguments -Wpointer-arith: YES 00:01:43.713 Compiler for C supports arguments -Wsign-compare: YES 00:01:43.713 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:43.713 Compiler for C supports arguments -Wundef: YES 00:01:43.713 Compiler for C supports arguments -Wwrite-strings: YES 00:01:43.713 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:43.713 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:43.713 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:43.713 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:43.713 Program objdump found: YES (/usr/bin/objdump) 00:01:43.713 Compiler for C supports arguments -mavx512f: YES 00:01:43.713 Checking if "AVX512 checking" compiles: YES 00:01:43.713 Fetching value of define "__SSE4_2__" : 1 00:01:43.713 Fetching value of define "__AES__" : 1 00:01:43.713 Fetching value of define "__AVX__" : 1 00:01:43.713 Fetching value of define "__AVX2__" : (undefined) 00:01:43.713 Fetching value of define "__AVX512BW__" : (undefined) 00:01:43.713 Fetching value of define "__AVX512CD__" : (undefined) 00:01:43.713 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:43.713 Fetching value of define "__AVX512F__" : (undefined) 00:01:43.713 Fetching value of define "__AVX512VL__" : (undefined) 00:01:43.713 Fetching value of define "__PCLMUL__" : 1 00:01:43.713 Fetching value of define "__RDRND__" : 1 00:01:43.713 Fetching value of define "__RDSEED__" : (undefined) 00:01:43.713 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:43.713 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:43.713 Message: lib/log: Defining dependency "log" 00:01:43.713 Message: lib/kvargs: Defining dependency "kvargs" 00:01:43.713 Message: lib/argparse: Defining dependency "argparse" 00:01:43.713 Message: lib/telemetry: Defining dependency "telemetry" 
00:01:43.713 Checking for function "getentropy" : NO 00:01:43.713 Message: lib/eal: Defining dependency "eal" 00:01:43.713 Message: lib/ring: Defining dependency "ring" 00:01:43.713 Message: lib/rcu: Defining dependency "rcu" 00:01:43.713 Message: lib/mempool: Defining dependency "mempool" 00:01:43.713 Message: lib/mbuf: Defining dependency "mbuf" 00:01:43.713 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:43.713 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:43.713 Compiler for C supports arguments -mpclmul: YES 00:01:43.713 Compiler for C supports arguments -maes: YES 00:01:43.713 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:43.713 Compiler for C supports arguments -mavx512bw: YES 00:01:43.713 Compiler for C supports arguments -mavx512dq: YES 00:01:43.713 Compiler for C supports arguments -mavx512vl: YES 00:01:43.713 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:43.713 Compiler for C supports arguments -mavx2: YES 00:01:43.713 Compiler for C supports arguments -mavx: YES 00:01:43.713 Message: lib/net: Defining dependency "net" 00:01:43.713 Message: lib/meter: Defining dependency "meter" 00:01:43.713 Message: lib/ethdev: Defining dependency "ethdev" 00:01:43.713 Message: lib/pci: Defining dependency "pci" 00:01:43.713 Message: lib/cmdline: Defining dependency "cmdline" 00:01:43.713 Message: lib/metrics: Defining dependency "metrics" 00:01:43.713 Message: lib/hash: Defining dependency "hash" 00:01:43.713 Message: lib/timer: Defining dependency "timer" 00:01:43.713 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:43.713 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:43.713 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:43.713 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:43.713 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:43.713 Message: lib/acl: Defining dependency "acl" 00:01:43.713 Message: lib/bbdev: Defining dependency "bbdev" 00:01:43.713 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:43.713 Run-time dependency libelf found: YES 0.190 00:01:43.713 Message: lib/bpf: Defining dependency "bpf" 00:01:43.713 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:43.713 Message: lib/compressdev: Defining dependency "compressdev" 00:01:43.713 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:43.713 Message: lib/distributor: Defining dependency "distributor" 00:01:43.713 Message: lib/dmadev: Defining dependency "dmadev" 00:01:43.713 Message: lib/efd: Defining dependency "efd" 00:01:43.713 Message: lib/eventdev: Defining dependency "eventdev" 00:01:43.713 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:43.713 Message: lib/gpudev: Defining dependency "gpudev" 00:01:43.713 Message: lib/gro: Defining dependency "gro" 00:01:43.713 Message: lib/gso: Defining dependency "gso" 00:01:43.713 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:43.713 Message: lib/jobstats: Defining dependency "jobstats" 00:01:43.713 Message: lib/latencystats: Defining dependency "latencystats" 00:01:43.713 Message: lib/lpm: Defining dependency "lpm" 00:01:43.713 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:43.713 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:43.713 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:43.713 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:43.713 Message: 
lib/member: Defining dependency "member" 00:01:43.713 Message: lib/pcapng: Defining dependency "pcapng" 00:01:43.713 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:43.713 Message: lib/power: Defining dependency "power" 00:01:43.713 Message: lib/rawdev: Defining dependency "rawdev" 00:01:43.713 Message: lib/regexdev: Defining dependency "regexdev" 00:01:43.713 Message: lib/mldev: Defining dependency "mldev" 00:01:43.713 Message: lib/rib: Defining dependency "rib" 00:01:43.713 Message: lib/reorder: Defining dependency "reorder" 00:01:43.713 Message: lib/sched: Defining dependency "sched" 00:01:43.713 Message: lib/security: Defining dependency "security" 00:01:43.713 Message: lib/stack: Defining dependency "stack" 00:01:43.713 Has header "linux/userfaultfd.h" : YES 00:01:43.713 Has header "linux/vduse.h" : YES 00:01:43.713 Message: lib/vhost: Defining dependency "vhost" 00:01:43.713 Message: lib/ipsec: Defining dependency "ipsec" 00:01:43.713 Message: lib/pdcp: Defining dependency "pdcp" 00:01:43.713 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:43.713 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:43.713 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:43.713 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:43.713 Message: lib/fib: Defining dependency "fib" 00:01:43.713 Message: lib/port: Defining dependency "port" 00:01:43.713 Message: lib/pdump: Defining dependency "pdump" 00:01:43.713 Message: lib/table: Defining dependency "table" 00:01:43.713 Message: lib/pipeline: Defining dependency "pipeline" 00:01:43.713 Message: lib/graph: Defining dependency "graph" 00:01:43.713 Message: lib/node: Defining dependency "node" 00:01:43.713 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:45.088 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:45.088 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:45.088 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:45.088 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:45.088 Compiler for C supports arguments -Wno-unused-value: YES 00:01:45.088 Compiler for C supports arguments -Wno-format: YES 00:01:45.088 Compiler for C supports arguments -Wno-format-security: YES 00:01:45.088 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:45.088 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:45.088 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:45.088 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:45.088 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:45.088 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:45.088 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:45.088 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:45.088 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:45.088 Has header "sys/epoll.h" : YES 00:01:45.088 Program doxygen found: YES (/usr/bin/doxygen) 00:01:45.088 Configuring doxy-api-html.conf using configuration 00:01:45.088 Configuring doxy-api-man.conf using configuration 00:01:45.088 Program mandb found: YES (/usr/bin/mandb) 00:01:45.088 Program sphinx-build found: NO 00:01:45.088 Configuring rte_build_config.h using configuration 00:01:45.088 Message: 00:01:45.088 ================= 00:01:45.088 Applications Enabled 00:01:45.088 ================= 00:01:45.088 00:01:45.088 apps: 00:01:45.088 dumpcap, graph, 
pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:01:45.089 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:45.089 test-pmd, test-regex, test-sad, test-security-perf, 00:01:45.089 00:01:45.089 Message: 00:01:45.089 ================= 00:01:45.089 Libraries Enabled 00:01:45.089 ================= 00:01:45.089 00:01:45.089 libs: 00:01:45.089 log, kvargs, argparse, telemetry, eal, ring, rcu, mempool, 00:01:45.089 mbuf, net, meter, ethdev, pci, cmdline, metrics, hash, 00:01:45.089 timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, 00:01:45.089 distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, 00:01:45.089 ip_frag, jobstats, latencystats, lpm, member, pcapng, power, rawdev, 00:01:45.089 regexdev, mldev, rib, reorder, sched, security, stack, vhost, 00:01:45.089 ipsec, pdcp, fib, port, pdump, table, pipeline, graph, 00:01:45.089 node, 00:01:45.089 00:01:45.089 Message: 00:01:45.089 =============== 00:01:45.089 Drivers Enabled 00:01:45.089 =============== 00:01:45.089 00:01:45.089 common: 00:01:45.089 00:01:45.089 bus: 00:01:45.089 pci, vdev, 00:01:45.089 mempool: 00:01:45.089 ring, 00:01:45.089 dma: 00:01:45.089 00:01:45.089 net: 00:01:45.089 i40e, 00:01:45.089 raw: 00:01:45.089 00:01:45.089 crypto: 00:01:45.089 00:01:45.089 compress: 00:01:45.089 00:01:45.089 regex: 00:01:45.089 00:01:45.089 ml: 00:01:45.089 00:01:45.089 vdpa: 00:01:45.089 00:01:45.089 event: 00:01:45.089 00:01:45.089 baseband: 00:01:45.089 00:01:45.089 gpu: 00:01:45.089 00:01:45.089 00:01:45.089 Message: 00:01:45.089 ================= 00:01:45.089 Content Skipped 00:01:45.089 ================= 00:01:45.089 00:01:45.089 apps: 00:01:45.089 00:01:45.089 libs: 00:01:45.089 00:01:45.089 drivers: 00:01:45.089 common/cpt: not in enabled drivers build config 00:01:45.089 common/dpaax: not in enabled drivers build config 00:01:45.089 common/iavf: not in enabled drivers build config 00:01:45.089 common/idpf: not in enabled drivers build config 00:01:45.089 common/ionic: not in enabled drivers build config 00:01:45.089 common/mvep: not in enabled drivers build config 00:01:45.089 common/octeontx: not in enabled drivers build config 00:01:45.089 bus/auxiliary: not in enabled drivers build config 00:01:45.089 bus/cdx: not in enabled drivers build config 00:01:45.089 bus/dpaa: not in enabled drivers build config 00:01:45.089 bus/fslmc: not in enabled drivers build config 00:01:45.089 bus/ifpga: not in enabled drivers build config 00:01:45.089 bus/platform: not in enabled drivers build config 00:01:45.089 bus/uacce: not in enabled drivers build config 00:01:45.089 bus/vmbus: not in enabled drivers build config 00:01:45.089 common/cnxk: not in enabled drivers build config 00:01:45.089 common/mlx5: not in enabled drivers build config 00:01:45.089 common/nfp: not in enabled drivers build config 00:01:45.089 common/nitrox: not in enabled drivers build config 00:01:45.089 common/qat: not in enabled drivers build config 00:01:45.089 common/sfc_efx: not in enabled drivers build config 00:01:45.089 mempool/bucket: not in enabled drivers build config 00:01:45.089 mempool/cnxk: not in enabled drivers build config 00:01:45.089 mempool/dpaa: not in enabled drivers build config 00:01:45.089 mempool/dpaa2: not in enabled drivers build config 00:01:45.089 mempool/octeontx: not in enabled drivers build config 00:01:45.089 mempool/stack: not in enabled drivers build config 00:01:45.089 dma/cnxk: not in enabled drivers build 
config 00:01:45.089 dma/dpaa: not in enabled drivers build config 00:01:45.089 dma/dpaa2: not in enabled drivers build config 00:01:45.089 dma/hisilicon: not in enabled drivers build config 00:01:45.089 dma/idxd: not in enabled drivers build config 00:01:45.089 dma/ioat: not in enabled drivers build config 00:01:45.089 dma/skeleton: not in enabled drivers build config 00:01:45.089 net/af_packet: not in enabled drivers build config 00:01:45.089 net/af_xdp: not in enabled drivers build config 00:01:45.089 net/ark: not in enabled drivers build config 00:01:45.089 net/atlantic: not in enabled drivers build config 00:01:45.089 net/avp: not in enabled drivers build config 00:01:45.089 net/axgbe: not in enabled drivers build config 00:01:45.089 net/bnx2x: not in enabled drivers build config 00:01:45.089 net/bnxt: not in enabled drivers build config 00:01:45.089 net/bonding: not in enabled drivers build config 00:01:45.089 net/cnxk: not in enabled drivers build config 00:01:45.089 net/cpfl: not in enabled drivers build config 00:01:45.089 net/cxgbe: not in enabled drivers build config 00:01:45.089 net/dpaa: not in enabled drivers build config 00:01:45.089 net/dpaa2: not in enabled drivers build config 00:01:45.089 net/e1000: not in enabled drivers build config 00:01:45.089 net/ena: not in enabled drivers build config 00:01:45.089 net/enetc: not in enabled drivers build config 00:01:45.089 net/enetfec: not in enabled drivers build config 00:01:45.089 net/enic: not in enabled drivers build config 00:01:45.089 net/failsafe: not in enabled drivers build config 00:01:45.089 net/fm10k: not in enabled drivers build config 00:01:45.089 net/gve: not in enabled drivers build config 00:01:45.089 net/hinic: not in enabled drivers build config 00:01:45.089 net/hns3: not in enabled drivers build config 00:01:45.089 net/iavf: not in enabled drivers build config 00:01:45.089 net/ice: not in enabled drivers build config 00:01:45.089 net/idpf: not in enabled drivers build config 00:01:45.089 net/igc: not in enabled drivers build config 00:01:45.089 net/ionic: not in enabled drivers build config 00:01:45.089 net/ipn3ke: not in enabled drivers build config 00:01:45.089 net/ixgbe: not in enabled drivers build config 00:01:45.089 net/mana: not in enabled drivers build config 00:01:45.089 net/memif: not in enabled drivers build config 00:01:45.089 net/mlx4: not in enabled drivers build config 00:01:45.089 net/mlx5: not in enabled drivers build config 00:01:45.089 net/mvneta: not in enabled drivers build config 00:01:45.089 net/mvpp2: not in enabled drivers build config 00:01:45.089 net/netvsc: not in enabled drivers build config 00:01:45.089 net/nfb: not in enabled drivers build config 00:01:45.089 net/nfp: not in enabled drivers build config 00:01:45.089 net/ngbe: not in enabled drivers build config 00:01:45.089 net/null: not in enabled drivers build config 00:01:45.089 net/octeontx: not in enabled drivers build config 00:01:45.089 net/octeon_ep: not in enabled drivers build config 00:01:45.089 net/pcap: not in enabled drivers build config 00:01:45.089 net/pfe: not in enabled drivers build config 00:01:45.089 net/qede: not in enabled drivers build config 00:01:45.089 net/ring: not in enabled drivers build config 00:01:45.089 net/sfc: not in enabled drivers build config 00:01:45.089 net/softnic: not in enabled drivers build config 00:01:45.089 net/tap: not in enabled drivers build config 00:01:45.089 net/thunderx: not in enabled drivers build config 00:01:45.089 net/txgbe: not in enabled drivers build config 
00:01:45.089 net/vdev_netvsc: not in enabled drivers build config 00:01:45.089 net/vhost: not in enabled drivers build config 00:01:45.089 net/virtio: not in enabled drivers build config 00:01:45.089 net/vmxnet3: not in enabled drivers build config 00:01:45.089 raw/cnxk_bphy: not in enabled drivers build config 00:01:45.089 raw/cnxk_gpio: not in enabled drivers build config 00:01:45.089 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:45.089 raw/ifpga: not in enabled drivers build config 00:01:45.089 raw/ntb: not in enabled drivers build config 00:01:45.089 raw/skeleton: not in enabled drivers build config 00:01:45.089 crypto/armv8: not in enabled drivers build config 00:01:45.089 crypto/bcmfs: not in enabled drivers build config 00:01:45.089 crypto/caam_jr: not in enabled drivers build config 00:01:45.089 crypto/ccp: not in enabled drivers build config 00:01:45.089 crypto/cnxk: not in enabled drivers build config 00:01:45.089 crypto/dpaa_sec: not in enabled drivers build config 00:01:45.089 crypto/dpaa2_sec: not in enabled drivers build config 00:01:45.089 crypto/ipsec_mb: not in enabled drivers build config 00:01:45.089 crypto/mlx5: not in enabled drivers build config 00:01:45.089 crypto/mvsam: not in enabled drivers build config 00:01:45.089 crypto/nitrox: not in enabled drivers build config 00:01:45.089 crypto/null: not in enabled drivers build config 00:01:45.089 crypto/octeontx: not in enabled drivers build config 00:01:45.089 crypto/openssl: not in enabled drivers build config 00:01:45.089 crypto/scheduler: not in enabled drivers build config 00:01:45.089 crypto/uadk: not in enabled drivers build config 00:01:45.089 crypto/virtio: not in enabled drivers build config 00:01:45.089 compress/isal: not in enabled drivers build config 00:01:45.089 compress/mlx5: not in enabled drivers build config 00:01:45.089 compress/nitrox: not in enabled drivers build config 00:01:45.089 compress/octeontx: not in enabled drivers build config 00:01:45.089 compress/zlib: not in enabled drivers build config 00:01:45.089 regex/mlx5: not in enabled drivers build config 00:01:45.089 regex/cn9k: not in enabled drivers build config 00:01:45.089 ml/cnxk: not in enabled drivers build config 00:01:45.089 vdpa/ifc: not in enabled drivers build config 00:01:45.089 vdpa/mlx5: not in enabled drivers build config 00:01:45.089 vdpa/nfp: not in enabled drivers build config 00:01:45.089 vdpa/sfc: not in enabled drivers build config 00:01:45.089 event/cnxk: not in enabled drivers build config 00:01:45.089 event/dlb2: not in enabled drivers build config 00:01:45.089 event/dpaa: not in enabled drivers build config 00:01:45.089 event/dpaa2: not in enabled drivers build config 00:01:45.089 event/dsw: not in enabled drivers build config 00:01:45.089 event/opdl: not in enabled drivers build config 00:01:45.089 event/skeleton: not in enabled drivers build config 00:01:45.089 event/sw: not in enabled drivers build config 00:01:45.089 event/octeontx: not in enabled drivers build config 00:01:45.089 baseband/acc: not in enabled drivers build config 00:01:45.089 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:45.089 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:45.089 baseband/la12xx: not in enabled drivers build config 00:01:45.089 baseband/null: not in enabled drivers build config 00:01:45.089 baseband/turbo_sw: not in enabled drivers build config 00:01:45.089 gpu/cuda: not in enabled drivers build config 00:01:45.089 00:01:45.089 00:01:45.089 Build targets in project: 224 
00:01:45.089 00:01:45.089 DPDK 24.07.0-rc0 00:01:45.089 00:01:45.089 User defined options 00:01:45.089 libdir : lib 00:01:45.089 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:45.089 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:45.090 c_link_args : 00:01:45.090 enable_docs : false 00:01:45.090 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:45.090 enable_kmods : false 00:01:45.090 machine : native 00:01:45.090 tests : false 00:01:45.090 00:01:45.090 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:45.090 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:01:45.357 02:42:35 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:01:45.357 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:45.357 [1/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:45.357 [2/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:45.357 [3/722] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:45.357 [4/722] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:45.357 [5/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:45.357 [6/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:45.357 [7/722] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:45.616 [8/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:45.616 [9/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:45.616 [10/722] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:45.616 [11/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:45.616 [12/722] Linking static target lib/librte_kvargs.a 00:01:45.616 [13/722] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:45.616 [14/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:45.616 [15/722] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:45.616 [16/722] Linking static target lib/librte_log.a 00:01:45.876 [17/722] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o 00:01:45.876 [18/722] Linking static target lib/librte_argparse.a 00:01:45.876 [19/722] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.154 [20/722] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.419 [21/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:46.419 [22/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:46.419 [23/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:46.419 [24/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:46.419 [25/722] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.419 [26/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:46.419 [27/722] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:46.419 [28/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:46.419 [29/722] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:46.419 [30/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:46.419 [31/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:46.419 [32/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:46.685 [33/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:46.685 [34/722] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:46.685 [35/722] Linking target lib/librte_log.so.24.2 00:01:46.685 [36/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:46.685 [37/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:46.685 [38/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:46.685 [39/722] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:46.685 [40/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:46.685 [41/722] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:46.685 [42/722] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:46.685 [43/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:46.685 [44/722] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:46.685 [45/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:46.685 [46/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:46.685 [47/722] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:46.685 [48/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:46.685 [49/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:46.685 [50/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:46.685 [51/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:46.685 [52/722] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:46.685 [53/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:46.685 [54/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:46.685 [55/722] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:46.685 [56/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:46.685 [57/722] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:46.685 [58/722] Generating symbol file lib/librte_log.so.24.2.p/librte_log.so.24.2.symbols 00:01:46.945 [59/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:46.945 [60/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:46.945 [61/722] Linking target lib/librte_kvargs.so.24.2 00:01:46.945 [62/722] Linking target lib/librte_argparse.so.24.2 00:01:46.945 [63/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:46.945 [64/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:47.205 [65/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:47.205 [66/722] Generating symbol file lib/librte_kvargs.so.24.2.p/librte_kvargs.so.24.2.symbols 00:01:47.205 [67/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:47.205 [68/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:47.205 [69/722] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:47.205 [70/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:47.468 [71/722] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:47.468 [72/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:47.468 [73/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:47.468 [74/722] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:47.468 [75/722] Linking static target lib/librte_pci.a 00:01:47.468 [76/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:47.729 [77/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:47.729 [78/722] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:47.729 [79/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:47.729 [80/722] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:47.729 [81/722] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:47.729 [82/722] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:47.729 [83/722] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:47.729 [84/722] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:47.729 [85/722] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:47.729 [86/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:47.729 [87/722] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:47.994 [88/722] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:47.994 [89/722] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:47.994 [90/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:47.994 [91/722] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:47.994 [92/722] Linking static target lib/librte_meter.a 00:01:47.994 [93/722] Linking static target lib/librte_ring.a 00:01:47.994 [94/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:47.994 [95/722] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:47.994 [96/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:47.994 [97/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:47.994 [98/722] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:47.994 [99/722] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.994 [100/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:47.994 [101/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:47.994 [102/722] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:47.994 [103/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:47.994 [104/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:47.994 [105/722] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:47.994 [106/722] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:47.994 [107/722] Linking static target lib/librte_telemetry.a 00:01:47.994 [108/722] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:47.994 [109/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:47.994 [110/722] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:48.261 [111/722] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:48.261 [112/722] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:48.261 [113/722] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:48.261 [114/722] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:48.261 [115/722] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.261 [116/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:48.261 [117/722] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.520 [118/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:48.520 [119/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:48.520 [120/722] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:48.520 [121/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:48.520 [122/722] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:48.520 [123/722] Linking static target lib/librte_net.a 00:01:48.520 [124/722] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:48.521 [125/722] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:48.521 [126/722] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:48.786 [127/722] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:48.786 [128/722] Linking static target lib/librte_mempool.a 00:01:48.786 [129/722] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.786 [130/722] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:48.786 [131/722] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:48.786 [132/722] Linking target lib/librte_telemetry.so.24.2 00:01:48.786 [133/722] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.786 [134/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:48.786 [135/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:48.786 [136/722] Linking static target lib/librte_eal.a 00:01:49.045 [137/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:49.045 [138/722] Linking static target lib/librte_cmdline.a 00:01:49.045 [139/722] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:49.045 [140/722] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:49.045 [141/722] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:49.045 [142/722] Linking static target lib/librte_cfgfile.a 00:01:49.045 [143/722] Generating symbol file lib/librte_telemetry.so.24.2.p/librte_telemetry.so.24.2.symbols 00:01:49.045 [144/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:49.045 [145/722] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:49.045 [146/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:49.045 [147/722] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:49.045 [148/722] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:49.307 [149/722] Linking static target lib/librte_metrics.a 00:01:49.307 [150/722] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:49.307 [151/722] Compiling C object 
lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:49.307 [152/722] Linking static target lib/librte_rcu.a 00:01:49.307 [153/722] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:49.307 [154/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:49.307 [155/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:49.567 [156/722] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:49.567 [157/722] Linking static target lib/librte_bitratestats.a 00:01:49.567 [158/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:49.567 [159/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:49.567 [160/722] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.567 [161/722] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:49.567 [162/722] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.830 [163/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:49.830 [164/722] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.830 [165/722] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:49.830 [166/722] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:49.830 [167/722] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.830 [168/722] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:49.830 [169/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:49.830 [170/722] Linking static target lib/librte_timer.a 00:01:49.830 [171/722] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.830 [172/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:50.094 [173/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:50.094 [174/722] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:50.094 [175/722] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:50.094 [176/722] Linking static target lib/librte_bbdev.a 00:01:50.094 [177/722] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:50.094 [178/722] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:50.094 [179/722] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:50.094 [180/722] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.357 [181/722] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:50.357 [182/722] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.357 [183/722] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:50.357 [184/722] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:50.357 [185/722] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:50.357 [186/722] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:50.357 [187/722] Linking static target lib/librte_compressdev.a 00:01:50.618 [188/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:50.878 [189/722] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:50.878 [190/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:50.878 
[191/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:50.878 [192/722] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:50.878 [193/722] Linking static target lib/librte_distributor.a 00:01:51.144 [194/722] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.144 [195/722] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:51.144 [196/722] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:51.144 [197/722] Linking static target lib/librte_dmadev.a 00:01:51.144 [198/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:51.144 [199/722] Linking static target lib/librte_bpf.a 00:01:51.144 [200/722] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.144 [201/722] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:51.144 [202/722] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:51.404 [203/722] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:51.404 [204/722] Linking static target lib/librte_dispatcher.a 00:01:51.404 [205/722] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:51.404 [206/722] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:51.404 [207/722] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:51.404 [208/722] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.404 [209/722] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:51.404 [210/722] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:51.404 [211/722] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:51.404 [212/722] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:51.404 [213/722] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:51.404 [214/722] Linking static target lib/librte_gpudev.a 00:01:51.404 [215/722] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:51.404 [216/722] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:51.404 [217/722] Linking static target lib/librte_gro.a 00:01:51.404 [218/722] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:51.667 [219/722] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:51.667 [220/722] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:51.667 [221/722] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:51.667 [222/722] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.667 [223/722] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:51.667 [224/722] Linking static target lib/librte_jobstats.a 00:01:51.929 [225/722] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:51.929 [226/722] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.929 [227/722] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:51.929 [228/722] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.929 [229/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:52.192 [230/722] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.192 [231/722] Compiling C object 
lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:52.192 [232/722] Linking static target lib/librte_latencystats.a 00:01:52.192 [233/722] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:52.192 [234/722] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.192 [235/722] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:52.192 [236/722] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:52.192 [237/722] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:52.192 [238/722] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:52.452 [239/722] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:52.452 [240/722] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:52.452 [241/722] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:52.452 [242/722] Linking static target lib/librte_ip_frag.a 00:01:52.452 [243/722] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.452 [244/722] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:52.452 [245/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:52.452 [246/722] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:52.719 [247/722] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:52.719 [248/722] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:52.719 [249/722] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:52.978 [250/722] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:01:52.978 [251/722] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.978 [252/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:52.978 [253/722] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.979 [254/722] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:52.979 [255/722] Linking static target lib/librte_gso.a 00:01:53.243 [256/722] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:01:53.243 [257/722] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:53.243 [258/722] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:53.243 [259/722] Linking static target lib/librte_regexdev.a 00:01:53.243 [260/722] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:53.243 [261/722] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:01:53.243 [262/722] Linking static target lib/librte_rawdev.a 00:01:53.243 [263/722] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:53.243 [264/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:53.243 [265/722] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:53.243 [266/722] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.243 [267/722] Linking static target lib/librte_efd.a 00:01:53.505 [268/722] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:53.505 [269/722] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:53.505 [270/722] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:53.505 [271/722] Compiling C object 
lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:53.505 [272/722] Linking static target lib/librte_pcapng.a 00:01:53.505 [273/722] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:53.505 [274/722] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:53.505 [275/722] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:53.505 [276/722] Linking static target lib/librte_mldev.a 00:01:53.505 [277/722] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:53.765 [278/722] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:53.765 [279/722] Linking static target lib/librte_stack.a 00:01:53.765 [280/722] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:53.765 [281/722] Linking static target lib/librte_lpm.a 00:01:53.765 [282/722] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.765 [283/722] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:53.765 [284/722] Linking static target lib/acl/libavx2_tmp.a 00:01:53.765 [285/722] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:53.765 [286/722] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:53.765 [287/722] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:54.026 [288/722] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:54.026 [289/722] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.026 [290/722] Linking static target lib/librte_hash.a 00:01:54.026 [291/722] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:54.026 [292/722] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.026 [293/722] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:54.026 [294/722] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.026 [295/722] Compiling C object lib/librte_port.a.p/port_port_log.c.o 00:01:54.026 [296/722] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:54.026 [297/722] Linking static target lib/acl/libavx512_tmp.a 00:01:54.289 [298/722] Linking static target lib/librte_acl.a 00:01:54.289 [299/722] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:54.289 [300/722] Linking static target lib/librte_reorder.a 00:01:54.289 [301/722] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:54.289 [302/722] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:54.289 [303/722] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:54.289 [304/722] Linking static target lib/librte_power.a 00:01:54.289 [305/722] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.289 [306/722] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.289 [307/722] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:54.289 [308/722] Linking static target lib/librte_security.a 00:01:54.551 [309/722] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:54.551 [310/722] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:54.551 [311/722] Linking static target lib/librte_mbuf.a 00:01:54.551 [312/722] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:54.551 [313/722] Generating lib/acl.sym_chk with a custom command (wrapped by 
meson to capture output) 00:01:54.551 [314/722] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:54.813 [315/722] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:54.813 [316/722] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.813 [317/722] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:54.813 [318/722] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:54.813 [319/722] Linking static target lib/librte_rib.a 00:01:54.813 [320/722] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:01:54.813 [321/722] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:54.814 [322/722] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:54.814 [323/722] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:54.814 [324/722] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.077 [325/722] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:55.077 [326/722] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.077 [327/722] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:55.077 [328/722] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:55.077 [329/722] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:55.077 [330/722] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:55.077 [331/722] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:55.077 [332/722] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:55.077 [333/722] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:55.340 [334/722] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.341 [335/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:55.341 [336/722] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.603 [337/722] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.603 [338/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:55.603 [339/722] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:55.603 [340/722] Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:01:55.604 [341/722] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:01:55.866 [342/722] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:55.866 [343/722] Linking static target lib/librte_member.a 00:01:55.866 [344/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:55.866 [345/722] Linking static target lib/librte_eventdev.a 00:01:55.866 [346/722] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.866 [347/722] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:56.127 [348/722] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:56.127 [349/722] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:56.127 [350/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:56.127 [351/722] Linking static target lib/librte_ethdev.a 00:01:56.127 [352/722] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:56.392 [353/722] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:56.392 [354/722] 
Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:56.392 [355/722] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:56.392 [356/722] Linking static target lib/librte_cryptodev.a 00:01:56.392 [357/722] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:56.392 [358/722] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:56.392 [359/722] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:56.392 [360/722] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.392 [361/722] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:56.392 [362/722] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:56.392 [363/722] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:56.392 [364/722] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:56.392 [365/722] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:56.392 [366/722] Linking static target lib/librte_sched.a 00:01:56.656 [367/722] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:56.656 [368/722] Linking static target lib/librte_fib.a 00:01:56.656 [369/722] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:56.656 [370/722] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:56.656 [371/722] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:56.656 [372/722] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:56.656 [373/722] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:56.656 [374/722] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:56.917 [375/722] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:56.917 [376/722] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:56.917 [377/722] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:57.183 [378/722] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.184 [379/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:57.184 [380/722] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.184 [381/722] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:57.184 [382/722] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:57.444 [383/722] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:57.444 [384/722] Linking static target lib/librte_pdump.a 00:01:57.444 [385/722] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:57.444 [386/722] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:57.444 [387/722] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:01:57.444 [388/722] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:57.711 [389/722] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:57.711 [390/722] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:57.711 [391/722] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:57.711 [392/722] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:57.711 [393/722] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:57.711 [394/722] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 
00:01:57.711 [395/722] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:57.711 [396/722] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:57.711 [397/722] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:01:57.711 [398/722] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.711 [399/722] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:57.975 [400/722] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:57.975 [401/722] Linking static target lib/librte_ipsec.a 00:01:57.975 [402/722] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:58.241 [403/722] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:58.241 [404/722] Linking static target lib/librte_table.a 00:01:58.241 [405/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:58.241 [406/722] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.241 [407/722] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:58.502 [408/722] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:01:58.502 [409/722] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:58.502 [410/722] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.767 [411/722] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:58.767 [412/722] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:58.767 [413/722] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:58.767 [414/722] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:58.767 [415/722] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:59.028 [416/722] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:59.028 [417/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:59.028 [418/722] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:59.028 [419/722] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:59.028 [420/722] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:59.028 [421/722] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:59.028 [422/722] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:59.292 [423/722] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.292 [424/722] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.292 [425/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:59.292 [426/722] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output) 00:01:59.560 [427/722] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:59.560 [428/722] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:59.560 [429/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:01:59.560 [430/722] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:59.560 [431/722] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:59.560 [432/722] Linking static target drivers/librte_bus_vdev.a 00:01:59.560 [433/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 
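Note: besides the plain "Compiling C object" and "Linking target" steps, two kinds of generated steps recur in this log. "Generating drivers/rte_X.pmd.c with a custom command" produces a generated source that is then compiled into both the static (drivers/librte_X.a) and shared (drivers/librte_X.so.24.2) variants of each driver, which is why the meson-generated_.._rte_X.pmd.c.o objects appear alongside the temporary libtmp_rte_X.a archives. "Generating lib/X.sym_chk with a custom command" is DPDK's per-library symbol check, and the "Generating symbol file ...symbols" entries are meson's exported-symbol listings used to decide whether dependent targets need relinking. A hedged way to inspect those exported symbols by hand once the build has finished, assuming standard binutils and the build directory used in this log:

# Hedged sketch: list the dynamic symbols exported by one of the shared
# libraries built above; the path follows the "Linking target lib/..." entries
# relative to the build-tmp directory used by this job.
nm -D --defined-only \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/lib/librte_eal.so.24.2 \
    | head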
00:01:59.560 [434/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:59.560 [435/722] Compiling C object drivers/librte_bus_vdev.so.24.2.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:59.823 [436/722] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:59.823 [437/722] Linking static target lib/librte_port.a 00:01:59.823 [438/722] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:59.823 [439/722] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:01:59.823 [440/722] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:59.823 [441/722] Linking static target lib/librte_graph.a 00:01:59.823 [442/722] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.823 [443/722] Linking static target drivers/librte_bus_pci.a 00:02:00.084 [444/722] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.084 [445/722] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:00.084 [446/722] Compiling C object drivers/librte_bus_pci.so.24.2.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:00.084 [447/722] Linking target lib/librte_eal.so.24.2 00:02:00.084 [448/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:00.084 [449/722] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:00.348 [450/722] Generating symbol file lib/librte_eal.so.24.2.p/librte_eal.so.24.2.symbols 00:02:00.348 [451/722] Linking target lib/librte_ring.so.24.2 00:02:00.348 [452/722] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:00.348 [453/722] Linking target lib/librte_meter.so.24.2 00:02:00.611 [454/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:00.611 [455/722] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:00.611 [456/722] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.611 [457/722] Linking target lib/librte_pci.so.24.2 00:02:00.611 [458/722] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:00.611 [459/722] Generating symbol file lib/librte_ring.so.24.2.p/librte_ring.so.24.2.symbols 00:02:00.611 [460/722] Linking target lib/librte_timer.so.24.2 00:02:00.611 [461/722] Linking target lib/librte_acl.so.24.2 00:02:00.611 [462/722] Linking target lib/librte_rcu.so.24.2 00:02:00.612 [463/722] Linking target lib/librte_cfgfile.so.24.2 00:02:00.612 [464/722] Linking target lib/librte_mempool.so.24.2 00:02:00.612 [465/722] Generating symbol file lib/librte_meter.so.24.2.p/librte_meter.so.24.2.symbols 00:02:00.903 [466/722] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.903 [467/722] Linking target lib/librte_jobstats.so.24.2 00:02:00.903 [468/722] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:00.903 [469/722] Linking target lib/librte_dmadev.so.24.2 00:02:00.903 [470/722] Generating symbol file lib/librte_pci.so.24.2.p/librte_pci.so.24.2.symbols 00:02:00.903 [471/722] Linking target lib/librte_stack.so.24.2 00:02:00.903 [472/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:00.903 [473/722] Linking target lib/librte_rawdev.so.24.2 00:02:00.904 [474/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:00.904 [475/722] Linking static target 
drivers/libtmp_rte_mempool_ring.a 00:02:00.904 [476/722] Generating symbol file lib/librte_rcu.so.24.2.p/librte_rcu.so.24.2.symbols 00:02:00.904 [477/722] Generating symbol file lib/librte_mempool.so.24.2.p/librte_mempool.so.24.2.symbols 00:02:00.904 [478/722] Linking target drivers/librte_bus_pci.so.24.2 00:02:00.904 [479/722] Generating symbol file lib/librte_acl.so.24.2.p/librte_acl.so.24.2.symbols 00:02:00.904 [480/722] Linking target drivers/librte_bus_vdev.so.24.2 00:02:00.904 [481/722] Generating symbol file lib/librte_timer.so.24.2.p/librte_timer.so.24.2.symbols 00:02:00.904 [482/722] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.904 [483/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:00.904 [484/722] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:00.904 [485/722] Linking target lib/librte_mbuf.so.24.2 00:02:01.180 [486/722] Linking target lib/librte_rib.so.24.2 00:02:01.180 [487/722] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:01.180 [488/722] Generating symbol file lib/librte_dmadev.so.24.2.p/librte_dmadev.so.24.2.symbols 00:02:01.180 [489/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:01.180 [490/722] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:01.180 [491/722] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:01.180 [492/722] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:01.180 [493/722] Generating symbol file drivers/librte_bus_pci.so.24.2.p/librte_bus_pci.so.24.2.symbols 00:02:01.180 [494/722] Generating symbol file drivers/librte_bus_vdev.so.24.2.p/librte_bus_vdev.so.24.2.symbols 00:02:01.180 [495/722] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:01.180 [496/722] Generating symbol file lib/librte_mbuf.so.24.2.p/librte_mbuf.so.24.2.symbols 00:02:01.180 [497/722] Generating symbol file lib/librte_rib.so.24.2.p/librte_rib.so.24.2.symbols 00:02:01.180 [498/722] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:01.439 [499/722] Linking target lib/librte_bbdev.so.24.2 00:02:01.439 [500/722] Linking target lib/librte_net.so.24.2 00:02:01.439 [501/722] Linking target lib/librte_compressdev.so.24.2 00:02:01.439 [502/722] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:01.439 [503/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:01.439 [504/722] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:01.439 [505/722] Linking target lib/librte_gpudev.so.24.2 00:02:01.439 [506/722] Linking target lib/librte_distributor.so.24.2 00:02:01.439 [507/722] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:01.440 [508/722] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:01.440 [509/722] Linking target lib/librte_cryptodev.so.24.2 00:02:01.440 [510/722] Linking target lib/librte_regexdev.so.24.2 00:02:01.440 [511/722] Linking static target drivers/librte_mempool_ring.a 00:02:01.440 [512/722] Compiling C object drivers/librte_mempool_ring.so.24.2.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:01.440 [513/722] Linking target lib/librte_reorder.so.24.2 00:02:01.440 [514/722] Linking target lib/librte_mldev.so.24.2 00:02:01.440 [515/722] Linking target lib/librte_sched.so.24.2 00:02:01.440 [516/722] Linking target lib/librte_fib.so.24.2 00:02:01.440 [517/722] Linking target 
drivers/librte_mempool_ring.so.24.2 00:02:01.710 [518/722] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o 00:02:01.710 [519/722] Generating symbol file lib/librte_net.so.24.2.p/librte_net.so.24.2.symbols 00:02:01.710 [520/722] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:01.710 [521/722] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:01.710 [522/722] Linking target lib/librte_cmdline.so.24.2 00:02:01.710 [523/722] Generating symbol file lib/librte_cryptodev.so.24.2.p/librte_cryptodev.so.24.2.symbols 00:02:01.710 [524/722] Linking target lib/librte_hash.so.24.2 00:02:01.710 [525/722] Generating symbol file lib/librte_reorder.so.24.2.p/librte_reorder.so.24.2.symbols 00:02:01.710 [526/722] Linking target lib/librte_security.so.24.2 00:02:01.710 [527/722] Generating symbol file lib/librte_sched.so.24.2.p/librte_sched.so.24.2.symbols 00:02:01.971 [528/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:01.971 [529/722] Generating symbol file lib/librte_hash.so.24.2.p/librte_hash.so.24.2.symbols 00:02:01.971 [530/722] Generating symbol file lib/librte_security.so.24.2.p/librte_security.so.24.2.symbols 00:02:01.971 [531/722] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:01.971 [532/722] Linking target lib/librte_efd.so.24.2 00:02:02.235 [533/722] Linking target lib/librte_lpm.so.24.2 00:02:02.235 [534/722] Linking target lib/librte_member.so.24.2 00:02:02.235 [535/722] Linking target lib/librte_ipsec.so.24.2 00:02:02.235 [536/722] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:02.235 [537/722] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:02.235 [538/722] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:02.235 [539/722] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:02.235 [540/722] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:02.235 [541/722] Generating symbol file lib/librte_lpm.so.24.2.p/librte_lpm.so.24.2.symbols 00:02:02.498 [542/722] Generating symbol file lib/librte_ipsec.so.24.2.p/librte_ipsec.so.24.2.symbols 00:02:02.498 [543/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:02.498 [544/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:02.498 [545/722] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:02.498 [546/722] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:02.498 [547/722] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:02.498 [548/722] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:02.759 [549/722] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:02.759 [550/722] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:02.759 [551/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:02.759 [552/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:03.022 [553/722] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:03.022 [554/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:03.022 [555/722] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:03.022 [556/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:03.288 [557/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:03.288 [558/722] 
Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:03.288 [559/722] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:03.288 [560/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:03.288 [561/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:03.547 [562/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:03.547 [563/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:03.547 [564/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:03.810 [565/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:03.810 [566/722] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:03.810 [567/722] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:03.810 [568/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:03.810 [569/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:04.077 [570/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:04.077 [571/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:04.338 [572/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:04.338 [573/722] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:04.603 [574/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:04.603 [575/722] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:04.603 [576/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:04.603 [577/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:04.603 [578/722] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:04.868 [579/722] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:04.868 [580/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:04.868 [581/722] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:04.868 [582/722] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.868 [583/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:05.132 [584/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:05.132 [585/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:05.132 [586/722] Linking target lib/librte_ethdev.so.24.2 00:02:05.132 [587/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:05.132 [588/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:05.132 [589/722] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:05.132 [590/722] Linking static target lib/librte_pdcp.a 00:02:05.132 [591/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:05.398 [592/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:05.398 [593/722] Generating symbol file lib/librte_ethdev.so.24.2.p/librte_ethdev.so.24.2.symbols 00:02:05.398 [594/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:05.398 [595/722] Linking target 
lib/librte_metrics.so.24.2 00:02:05.398 [596/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:05.398 [597/722] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:05.398 [598/722] Linking target lib/librte_bpf.so.24.2 00:02:05.398 [599/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:05.662 [600/722] Linking target lib/librte_gro.so.24.2 00:02:05.662 [601/722] Linking target lib/librte_eventdev.so.24.2 00:02:05.662 [602/722] Linking target lib/librte_gso.so.24.2 00:02:05.662 [603/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:05.662 [604/722] Generating symbol file lib/librte_metrics.so.24.2.p/librte_metrics.so.24.2.symbols 00:02:05.662 [605/722] Linking target lib/librte_ip_frag.so.24.2 00:02:05.662 [606/722] Linking target lib/librte_pcapng.so.24.2 00:02:05.662 [607/722] Generating symbol file lib/librte_bpf.so.24.2.p/librte_bpf.so.24.2.symbols 00:02:05.662 [608/722] Linking target lib/librte_bitratestats.so.24.2 00:02:05.662 [609/722] Linking target lib/librte_power.so.24.2 00:02:05.662 [610/722] Linking target lib/librte_latencystats.so.24.2 00:02:05.662 [611/722] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.662 [612/722] Generating symbol file lib/librte_eventdev.so.24.2.p/librte_eventdev.so.24.2.symbols 00:02:05.926 [613/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:05.926 [614/722] Generating symbol file lib/librte_ip_frag.so.24.2.p/librte_ip_frag.so.24.2.symbols 00:02:05.926 [615/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:05.926 [616/722] Linking target lib/librte_dispatcher.so.24.2 00:02:05.926 [617/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:05.927 [618/722] Linking target lib/librte_pdcp.so.24.2 00:02:05.927 [619/722] Generating symbol file lib/librte_pcapng.so.24.2.p/librte_pcapng.so.24.2.symbols 00:02:05.927 [620/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:05.927 [621/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:05.927 [622/722] Linking target lib/librte_port.so.24.2 00:02:05.927 [623/722] Linking target lib/librte_pdump.so.24.2 00:02:05.927 [624/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:05.927 [625/722] Linking target lib/librte_graph.so.24.2 00:02:06.186 [626/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:06.186 [627/722] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:06.186 [628/722] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:06.186 [629/722] Generating symbol file lib/librte_port.so.24.2.p/librte_port.so.24.2.symbols 00:02:06.186 [630/722] Generating symbol file lib/librte_graph.so.24.2.p/librte_graph.so.24.2.symbols 00:02:06.186 [631/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:06.455 [632/722] Linking target lib/librte_table.so.24.2 00:02:06.455 [633/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:06.455 [634/722] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:06.455 [635/722] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:06.455 [636/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:06.455 
[637/722] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:06.455 [638/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:06.717 [639/722] Generating symbol file lib/librte_table.so.24.2.p/librte_table.so.24.2.symbols 00:02:06.717 [640/722] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:06.978 [641/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:06.978 [642/722] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:07.239 [643/722] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:07.239 [644/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:07.239 [645/722] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:07.239 [646/722] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:07.239 [647/722] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:07.239 [648/722] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:07.239 [649/722] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:07.497 [650/722] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:07.497 [651/722] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:07.497 [652/722] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:07.497 [653/722] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:07.755 [654/722] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:07.755 [655/722] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:07.755 [656/722] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:07.755 [657/722] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:07.755 [658/722] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:08.013 [659/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:08.013 [660/722] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:08.013 [661/722] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:08.013 [662/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:08.013 [663/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:08.271 [664/722] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:08.529 [665/722] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o 00:02:08.529 [666/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:08.529 [667/722] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:08.529 [668/722] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:08.529 [669/722] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:08.788 [670/722] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:09.046 [671/722] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:09.046 [672/722] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:09.046 [673/722] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:09.046 [674/722] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:09.046 [675/722] 
Compiling C object drivers/librte_net_i40e.so.24.2.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:09.046 [676/722] Linking static target drivers/librte_net_i40e.a 00:02:09.046 [677/722] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:09.304 [678/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:09.562 [679/722] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.562 [680/722] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:09.562 [681/722] Linking target drivers/librte_net_i40e.so.24.2 00:02:09.820 [682/722] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:10.079 [683/722] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:10.079 [684/722] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:10.337 [685/722] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:10.337 [686/722] Linking static target lib/librte_node.a 00:02:10.596 [687/722] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.596 [688/722] Linking target lib/librte_node.so.24.2 00:02:11.530 [689/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:11.530 [690/722] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:12.097 [691/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:13.470 [692/722] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:14.036 [693/722] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:19.326 [694/722] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:51.392 [695/722] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:51.392 [696/722] Linking static target lib/librte_vhost.a 00:02:51.392 [697/722] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.392 [698/722] Linking target lib/librte_vhost.so.24.2 00:03:03.613 [699/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:03.613 [700/722] Linking static target lib/librte_pipeline.a 00:03:04.181 [701/722] Linking target app/dpdk-pdump 00:03:04.181 [702/722] Linking target app/dpdk-proc-info 00:03:04.181 [703/722] Linking target app/dpdk-dumpcap 00:03:04.181 [704/722] Linking target app/dpdk-test-sad 00:03:04.181 [705/722] Linking target app/dpdk-test-acl 00:03:04.181 [706/722] Linking target app/dpdk-test-cmdline 00:03:04.181 [707/722] Linking target app/dpdk-test-dma-perf 00:03:04.181 [708/722] Linking target app/dpdk-test-pipeline 00:03:04.181 [709/722] Linking target app/dpdk-test-gpudev 00:03:04.181 [710/722] Linking target app/dpdk-test-fib 00:03:04.181 [711/722] Linking target app/dpdk-test-regex 00:03:04.181 [712/722] Linking target app/dpdk-test-flow-perf 00:03:04.181 [713/722] Linking target app/dpdk-test-security-perf 00:03:04.181 [714/722] Linking target app/dpdk-graph 00:03:04.181 [715/722] Linking target app/dpdk-test-mldev 00:03:04.181 [716/722] Linking target app/dpdk-test-bbdev 00:03:04.181 [717/722] Linking target app/dpdk-test-crypto-perf 00:03:04.181 [718/722] Linking target app/dpdk-test-compress-perf 00:03:04.181 [719/722] Linking target app/dpdk-test-eventdev 00:03:04.181 [720/722] Linking target app/dpdk-testpmd 00:03:06.085 [721/722] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 
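Note: the compile and link steps end at [722/722] just below, after which the install pass starts. The two ninja invocations recorded in this log (the -j48 build at autobuild_common.sh@186 above and the install run at autobuild_common.sh@187 below) amount to the standard meson/ninja sequence; a hedged sketch of the equivalent manual commands, with the directory and job count taken from this log and the SPDK wrapper logic omitted:

# Sketch of the build-and-install sequence this job drives through
# common/autobuild_common.sh; only the ninja calls visible in the log are shown.
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
ninja -C build-tmp -j48        # runs the 722 build steps numbered above
ninja -C build-tmp install     # copies libs, headers and examples into the configured prefix
# After the install, the result can be consumed through pkg-config, e.g.:
#   PKG_CONFIG_PATH=build/lib/pkgconfig pkg-config --libs libdpdk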
00:03:06.349 [722/722] Linking target lib/librte_pipeline.so.24.2 00:03:06.349 02:43:56 build_native_dpdk -- common/autobuild_common.sh@187 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:03:06.349 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:03:06.349 [0/1] Installing files. 00:03:06.614 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:03:06.614 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.614 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.614 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.615 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:06.615 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.615 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.615 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:06.616 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:06.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:06.617 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.617 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.618 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.618 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.618 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.618 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 
00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:06.619 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.619 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.619 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.620 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:06.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:06.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:06.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:06.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:06.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:03:06.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:06.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:06.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:06.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:06.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:06.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:06.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:06.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:06.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:06.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:06.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:06.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:06.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:06.620 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:06.620 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_log.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_kvargs.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_argparse.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_argparse.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_telemetry.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_eal.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_rcu.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_mempool.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_mbuf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_net.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_meter.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing 
lib/librte_ethdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_cmdline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_metrics.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_hash.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_timer.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_acl.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_bbdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_bitratestats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_bpf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_cfgfile.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_compressdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_cryptodev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_distributor.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_dmadev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.620 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.621 Installing lib/librte_efd.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.621 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.621 Installing 
lib/librte_eventdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.621 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.621 Installing lib/librte_dispatcher.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.621 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.621 Installing lib/librte_gpudev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.621 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.621 Installing lib/librte_gro.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.621 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.621 Installing lib/librte_gso.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.621 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.621 Installing lib/librte_ip_frag.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.621 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.621 Installing lib/librte_jobstats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.621 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.621 Installing lib/librte_latencystats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.621 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.621 Installing lib/librte_lpm.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.621 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.621 Installing lib/librte_member.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.621 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.621 Installing lib/librte_pcapng.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.621 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.621 Installing lib/librte_power.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.621 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.621 Installing lib/librte_rawdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.189 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.189 Installing lib/librte_regexdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.189 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.189 Installing lib/librte_mldev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.189 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.189 Installing lib/librte_rib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.189 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.189 Installing lib/librte_reorder.so.24.2 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.189 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.189 Installing lib/librte_sched.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.189 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.189 Installing lib/librte_security.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.189 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.189 Installing lib/librte_stack.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.189 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.189 Installing lib/librte_vhost.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.189 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.189 Installing lib/librte_ipsec.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.189 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.189 Installing lib/librte_pdcp.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.189 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.189 Installing lib/librte_fib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.189 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.189 Installing lib/librte_port.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.189 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.189 Installing lib/librte_pdump.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.190 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.190 Installing lib/librte_table.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.190 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.190 Installing lib/librte_pipeline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.190 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.190 Installing lib/librte_graph.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.190 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.190 Installing lib/librte_node.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.190 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.190 Installing drivers/librte_bus_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:03:07.190 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.190 Installing drivers/librte_bus_vdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:03:07.190 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.190 Installing 
drivers/librte_mempool_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:03:07.190 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.190 Installing drivers/librte_net_i40e.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:03:07.190 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.190 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.190 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.190 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.190 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.190 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.190 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.190 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.190 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.190 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.190 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.190 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.190 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.190 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.190 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.190 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.190 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.190 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.190 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.190 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/argparse/rte_argparse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:07.190 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.190 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.451 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:07.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:07.454 Installing symlink pointing to librte_log.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:03:07.454 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:03:07.454 Installing symlink pointing to librte_kvargs.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:03:07.454 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:03:07.454 Installing symlink pointing to librte_argparse.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_argparse.so.24 00:03:07.454 Installing symlink pointing to librte_argparse.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_argparse.so 00:03:07.454 Installing symlink pointing to librte_telemetry.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:03:07.454 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:03:07.454 Installing symlink pointing to librte_eal.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:03:07.454 Installing symlink pointing to librte_eal.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:03:07.454 Installing symlink pointing to librte_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:03:07.454 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:03:07.454 Installing symlink pointing to librte_rcu.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:03:07.454 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:03:07.454 Installing symlink pointing to librte_mempool.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:03:07.454 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:03:07.454 Installing symlink pointing to librte_mbuf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:03:07.454 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:03:07.454 Installing symlink pointing to librte_net.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:03:07.454 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:03:07.454 Installing symlink pointing to librte_meter.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:03:07.454 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:03:07.454 Installing symlink pointing to librte_ethdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:03:07.454 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:03:07.454 
Installing symlink pointing to librte_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:03:07.454 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:03:07.454 Installing symlink pointing to librte_cmdline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:03:07.454 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:03:07.454 Installing symlink pointing to librte_metrics.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:03:07.454 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:03:07.454 Installing symlink pointing to librte_hash.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:03:07.454 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:03:07.454 Installing symlink pointing to librte_timer.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:03:07.454 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:03:07.454 Installing symlink pointing to librte_acl.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:03:07.454 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:03:07.454 Installing symlink pointing to librte_bbdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:03:07.454 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:03:07.454 Installing symlink pointing to librte_bitratestats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:03:07.454 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:03:07.454 Installing symlink pointing to librte_bpf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:03:07.454 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:03:07.454 Installing symlink pointing to librte_cfgfile.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:03:07.454 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:03:07.454 Installing symlink pointing to librte_compressdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:03:07.454 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:03:07.454 Installing symlink pointing to librte_cryptodev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:03:07.454 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:03:07.454 Installing symlink 
pointing to librte_distributor.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:03:07.454 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:03:07.454 Installing symlink pointing to librte_dmadev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:03:07.454 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:03:07.454 Installing symlink pointing to librte_efd.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:03:07.454 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:03:07.454 Installing symlink pointing to librte_eventdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:03:07.454 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:03:07.454 Installing symlink pointing to librte_dispatcher.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:03:07.454 Installing symlink pointing to librte_dispatcher.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:03:07.454 Installing symlink pointing to librte_gpudev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:03:07.454 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:03:07.454 Installing symlink pointing to librte_gro.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:03:07.454 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:03:07.454 Installing symlink pointing to librte_gso.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:03:07.454 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:03:07.454 Installing symlink pointing to librte_ip_frag.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:03:07.454 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:03:07.454 './librte_bus_pci.so' -> 'dpdk/pmds-24.2/librte_bus_pci.so' 00:03:07.454 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24' 00:03:07.454 './librte_bus_pci.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24.2' 00:03:07.454 './librte_bus_vdev.so' -> 'dpdk/pmds-24.2/librte_bus_vdev.so' 00:03:07.454 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24' 00:03:07.454 './librte_bus_vdev.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24.2' 00:03:07.454 './librte_mempool_ring.so' -> 'dpdk/pmds-24.2/librte_mempool_ring.so' 00:03:07.454 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24' 00:03:07.454 './librte_mempool_ring.so.24.2' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24.2' 00:03:07.454 './librte_net_i40e.so' -> 'dpdk/pmds-24.2/librte_net_i40e.so' 00:03:07.454 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24' 00:03:07.454 './librte_net_i40e.so.24.2' 
-> 'dpdk/pmds-24.2/librte_net_i40e.so.24.2' 00:03:07.454 Installing symlink pointing to librte_jobstats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:03:07.454 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:03:07.454 Installing symlink pointing to librte_latencystats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:03:07.455 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:03:07.455 Installing symlink pointing to librte_lpm.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:03:07.455 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:03:07.455 Installing symlink pointing to librte_member.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:03:07.455 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:03:07.455 Installing symlink pointing to librte_pcapng.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:03:07.455 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:03:07.455 Installing symlink pointing to librte_power.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:03:07.455 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:03:07.455 Installing symlink pointing to librte_rawdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:03:07.455 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:03:07.455 Installing symlink pointing to librte_regexdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:03:07.455 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:03:07.455 Installing symlink pointing to librte_mldev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:03:07.455 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:03:07.455 Installing symlink pointing to librte_rib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:03:07.455 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:03:07.455 Installing symlink pointing to librte_reorder.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:03:07.455 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:03:07.455 Installing symlink pointing to librte_sched.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:03:07.455 Installing symlink pointing to librte_sched.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:03:07.455 Installing symlink pointing to librte_security.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:03:07.455 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:03:07.455 Installing symlink pointing to librte_stack.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:03:07.455 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:03:07.455 Installing symlink pointing to librte_vhost.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:03:07.455 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:03:07.455 Installing symlink pointing to librte_ipsec.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:03:07.455 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:03:07.455 Installing symlink pointing to librte_pdcp.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:03:07.455 Installing symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:03:07.455 Installing symlink pointing to librte_fib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:03:07.455 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:03:07.455 Installing symlink pointing to librte_port.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:03:07.455 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:03:07.455 Installing symlink pointing to librte_pdump.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:03:07.455 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:03:07.455 Installing symlink pointing to librte_table.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:03:07.455 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:03:07.455 Installing symlink pointing to librte_pipeline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:03:07.455 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:03:07.455 Installing symlink pointing to librte_graph.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:03:07.455 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:03:07.455 Installing symlink pointing to librte_node.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:03:07.455 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 
00:03:07.455 Installing symlink pointing to librte_bus_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24 00:03:07.455 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:03:07.455 Installing symlink pointing to librte_bus_vdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24 00:03:07.455 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:03:07.455 Installing symlink pointing to librte_mempool_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24 00:03:07.455 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:03:07.455 Installing symlink pointing to librte_net_i40e.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24 00:03:07.455 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:03:07.455 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.2' 00:03:07.455 02:43:58 build_native_dpdk -- common/autobuild_common.sh@189 -- $ uname -s 00:03:07.455 02:43:58 build_native_dpdk -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:07.455 02:43:58 build_native_dpdk -- common/autobuild_common.sh@200 -- $ cat 00:03:07.455 02:43:58 build_native_dpdk -- common/autobuild_common.sh@205 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:07.455 00:03:07.455 real 1m28.024s 00:03:07.455 user 18m18.481s 00:03:07.455 sys 2m10.114s 00:03:07.455 02:43:58 build_native_dpdk -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:03:07.455 02:43:58 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:07.455 ************************************ 00:03:07.455 END TEST build_native_dpdk 00:03:07.455 ************************************ 00:03:07.455 02:43:58 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:07.455 02:43:58 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:07.455 02:43:58 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:07.455 02:43:58 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:07.455 02:43:58 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:07.455 02:43:58 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:07.455 02:43:58 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:07.455 02:43:58 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:03:07.455 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 
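The configure invocation above points SPDK at the DPDK tree that was just populated under dpdk/build, and the closing "Using .../dpdk/build/lib/pkgconfig for additional libs..." record is that dependency being resolved through the libdpdk.pc / libdpdk-libs.pc files installed a few records earlier. A rough sketch of the same lookup done by hand (not part of the log; it assumes pkg-config is present on the build host and reuses the workspace path shown above):

  # Point pkg-config at the freshly installed DPDK and query it the way a build system would.
  export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
  pkg-config --modversion libdpdk   # version advertised by the installed .pc file
  pkg-config --cflags libdpdk       # include flags, e.g. -I.../dpdk/build/include
  pkg-config --libs libdpdk         # -L/-l flags for the shared librte_* libraries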
00:03:07.713 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.713 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.714 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:03:07.978 Using 'verbs' RDMA provider 00:03:18.559 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:26.678 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:26.936 Creating mk/config.mk...done. 00:03:26.936 Creating mk/cc.flags.mk...done. 00:03:26.936 Type 'make' to build. 00:03:26.936 02:44:17 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:03:26.936 02:44:17 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:03:26.936 02:44:17 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:03:26.936 02:44:17 -- common/autotest_common.sh@10 -- $ set +x 00:03:26.936 ************************************ 00:03:26.936 START TEST make 00:03:26.936 ************************************ 00:03:26.936 02:44:17 make -- common/autotest_common.sh@1121 -- $ make -j48 00:03:27.196 make[1]: Nothing to be done for 'all'. 00:03:28.582 The Meson build system 00:03:28.582 Version: 1.3.1 00:03:28.582 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:28.582 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:28.582 Build type: native build 00:03:28.582 Project name: libvfio-user 00:03:28.582 Project version: 0.0.1 00:03:28.582 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:28.582 C linker for the host machine: gcc ld.bfd 2.39-16 00:03:28.582 Host machine cpu family: x86_64 00:03:28.582 Host machine cpu: x86_64 00:03:28.582 Run-time dependency threads found: YES 00:03:28.582 Library dl found: YES 00:03:28.582 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:28.582 Run-time dependency json-c found: YES 0.17 00:03:28.582 Run-time dependency cmocka found: YES 1.1.7 00:03:28.582 Program pytest-3 found: NO 00:03:28.582 Program flake8 found: NO 00:03:28.582 Program misspell-fixer found: NO 00:03:28.582 Program restructuredtext-lint found: NO 00:03:28.582 Program valgrind found: YES (/usr/bin/valgrind) 00:03:28.582 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:28.582 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:28.582 Compiler for C supports arguments -Wwrite-strings: YES 00:03:28.582 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:28.582 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:28.582 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:28.582 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:28.582 Build targets in project: 8 00:03:28.582 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:28.582 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:28.582 00:03:28.582 libvfio-user 0.0.1 00:03:28.582 00:03:28.582 User defined options 00:03:28.582 buildtype : debug 00:03:28.582 default_library: shared 00:03:28.582 libdir : /usr/local/lib 00:03:28.582 00:03:28.582 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:29.530 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:29.530 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:29.530 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:29.793 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:29.793 [4/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:29.793 [5/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:29.793 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:29.793 [7/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:29.793 [8/37] Compiling C object samples/null.p/null.c.o 00:03:29.793 [9/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:29.793 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:29.793 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:29.793 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:29.793 [13/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:29.793 [14/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:29.793 [15/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:29.793 [16/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:29.793 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:29.793 [18/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:29.793 [19/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:29.793 [20/37] Compiling C object samples/server.p/server.c.o 00:03:29.793 [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:29.793 [22/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:29.793 [23/37] Compiling C object samples/client.p/client.c.o 00:03:29.793 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:29.793 [25/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:30.058 [26/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:30.058 [27/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:30.058 [28/37] Linking target samples/client 00:03:30.058 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:03:30.058 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:30.058 [31/37] Linking target test/unit_tests 00:03:30.319 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:30.319 [33/37] Linking target samples/server 00:03:30.319 [34/37] Linking target samples/gpio-pci-idio-16 00:03:30.319 [35/37] Linking target samples/lspci 00:03:30.319 [36/37] Linking target samples/null 00:03:30.319 [37/37] Linking target samples/shadow_ioeventfd_server 00:03:30.319 INFO: autodetecting backend as ninja 00:03:30.319 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
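The block above is the bundled libvfio-user being configured with Meson (buildtype debug, default_library shared) and compiled with Ninja; the next record stages its installation into SPDK's build tree via a DESTDIR install. A minimal sketch of that configure/build/stage flow, using placeholder directories rather than the exact autotest paths:

  # Out-of-tree debug build of a Meson project, then a staged install under DESTDIR.
  meson setup build-debug --buildtype=debug -Ddefault_library=shared
  ninja -C build-debug
  DESTDIR=/tmp/stagedir meson install --quiet -C build-debug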
00:03:30.319 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:31.265 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:31.265 ninja: no work to do. 00:03:43.466 CC lib/log/log.o 00:03:43.466 CC lib/log/log_flags.o 00:03:43.466 CC lib/ut/ut.o 00:03:43.466 CC lib/log/log_deprecated.o 00:03:43.466 CC lib/ut_mock/mock.o 00:03:43.466 LIB libspdk_ut_mock.a 00:03:43.466 SO libspdk_ut_mock.so.6.0 00:03:43.466 LIB libspdk_log.a 00:03:43.466 LIB libspdk_ut.a 00:03:43.466 SO libspdk_ut.so.2.0 00:03:43.466 SO libspdk_log.so.7.0 00:03:43.466 SYMLINK libspdk_ut_mock.so 00:03:43.466 SYMLINK libspdk_ut.so 00:03:43.466 SYMLINK libspdk_log.so 00:03:43.466 CC lib/dma/dma.o 00:03:43.466 CC lib/ioat/ioat.o 00:03:43.466 CXX lib/trace_parser/trace.o 00:03:43.466 CC lib/util/base64.o 00:03:43.466 CC lib/util/bit_array.o 00:03:43.466 CC lib/util/cpuset.o 00:03:43.466 CC lib/util/crc16.o 00:03:43.466 CC lib/util/crc32.o 00:03:43.466 CC lib/util/crc32c.o 00:03:43.466 CC lib/util/crc32_ieee.o 00:03:43.466 CC lib/util/crc64.o 00:03:43.466 CC lib/util/dif.o 00:03:43.466 CC lib/util/fd.o 00:03:43.466 CC lib/util/file.o 00:03:43.466 CC lib/util/hexlify.o 00:03:43.466 CC lib/util/iov.o 00:03:43.466 CC lib/util/math.o 00:03:43.466 CC lib/util/pipe.o 00:03:43.466 CC lib/util/strerror_tls.o 00:03:43.466 CC lib/util/string.o 00:03:43.466 CC lib/util/uuid.o 00:03:43.466 CC lib/util/fd_group.o 00:03:43.466 CC lib/util/xor.o 00:03:43.466 CC lib/util/zipf.o 00:03:43.466 CC lib/vfio_user/host/vfio_user_pci.o 00:03:43.466 CC lib/vfio_user/host/vfio_user.o 00:03:43.466 LIB libspdk_dma.a 00:03:43.466 SO libspdk_dma.so.4.0 00:03:43.466 SYMLINK libspdk_dma.so 00:03:43.466 LIB libspdk_ioat.a 00:03:43.466 SO libspdk_ioat.so.7.0 00:03:43.466 SYMLINK libspdk_ioat.so 00:03:43.466 LIB libspdk_vfio_user.a 00:03:43.466 SO libspdk_vfio_user.so.5.0 00:03:43.727 SYMLINK libspdk_vfio_user.so 00:03:43.727 LIB libspdk_util.a 00:03:43.727 SO libspdk_util.so.9.0 00:03:43.986 SYMLINK libspdk_util.so 00:03:43.986 CC lib/json/json_parse.o 00:03:43.986 CC lib/idxd/idxd.o 00:03:43.986 CC lib/json/json_util.o 00:03:43.986 CC lib/vmd/vmd.o 00:03:43.986 CC lib/rdma/common.o 00:03:43.986 CC lib/conf/conf.o 00:03:43.986 CC lib/idxd/idxd_user.o 00:03:43.986 CC lib/env_dpdk/env.o 00:03:43.986 CC lib/json/json_write.o 00:03:43.986 CC lib/vmd/led.o 00:03:43.986 CC lib/rdma/rdma_verbs.o 00:03:43.986 CC lib/env_dpdk/memory.o 00:03:43.986 CC lib/env_dpdk/pci.o 00:03:43.986 CC lib/env_dpdk/init.o 00:03:43.986 CC lib/env_dpdk/threads.o 00:03:43.986 CC lib/env_dpdk/pci_ioat.o 00:03:43.986 CC lib/env_dpdk/pci_virtio.o 00:03:43.986 CC lib/env_dpdk/pci_vmd.o 00:03:43.986 CC lib/env_dpdk/pci_idxd.o 00:03:43.986 CC lib/env_dpdk/pci_event.o 00:03:43.986 CC lib/env_dpdk/sigbus_handler.o 00:03:43.986 CC lib/env_dpdk/pci_dpdk.o 00:03:43.986 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:43.986 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:43.986 LIB libspdk_trace_parser.a 00:03:44.244 SO libspdk_trace_parser.so.5.0 00:03:44.244 SYMLINK libspdk_trace_parser.so 00:03:44.505 LIB libspdk_conf.a 00:03:44.505 LIB libspdk_rdma.a 00:03:44.505 LIB libspdk_json.a 00:03:44.505 SO libspdk_conf.so.6.0 00:03:44.505 SO libspdk_rdma.so.6.0 00:03:44.505 SO libspdk_json.so.6.0 00:03:44.505 SYMLINK libspdk_conf.so 00:03:44.505 SYMLINK libspdk_rdma.so 00:03:44.505 SYMLINK libspdk_json.so 00:03:44.505 LIB 
libspdk_idxd.a 00:03:44.505 CC lib/jsonrpc/jsonrpc_server.o 00:03:44.505 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:44.505 CC lib/jsonrpc/jsonrpc_client.o 00:03:44.505 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:44.763 SO libspdk_idxd.so.12.0 00:03:44.763 SYMLINK libspdk_idxd.so 00:03:44.763 LIB libspdk_vmd.a 00:03:44.763 SO libspdk_vmd.so.6.0 00:03:44.763 SYMLINK libspdk_vmd.so 00:03:45.021 LIB libspdk_jsonrpc.a 00:03:45.021 SO libspdk_jsonrpc.so.6.0 00:03:45.021 SYMLINK libspdk_jsonrpc.so 00:03:45.279 CC lib/rpc/rpc.o 00:03:45.279 LIB libspdk_rpc.a 00:03:45.538 SO libspdk_rpc.so.6.0 00:03:45.538 SYMLINK libspdk_rpc.so 00:03:45.538 CC lib/keyring/keyring.o 00:03:45.538 CC lib/trace/trace.o 00:03:45.538 CC lib/trace/trace_flags.o 00:03:45.538 CC lib/keyring/keyring_rpc.o 00:03:45.538 CC lib/trace/trace_rpc.o 00:03:45.538 CC lib/notify/notify.o 00:03:45.538 CC lib/notify/notify_rpc.o 00:03:45.797 LIB libspdk_notify.a 00:03:45.797 SO libspdk_notify.so.6.0 00:03:45.797 LIB libspdk_keyring.a 00:03:45.797 SYMLINK libspdk_notify.so 00:03:45.797 LIB libspdk_trace.a 00:03:45.797 SO libspdk_keyring.so.1.0 00:03:45.797 SO libspdk_trace.so.10.0 00:03:46.055 SYMLINK libspdk_keyring.so 00:03:46.055 SYMLINK libspdk_trace.so 00:03:46.055 CC lib/thread/thread.o 00:03:46.055 CC lib/thread/iobuf.o 00:03:46.055 CC lib/sock/sock.o 00:03:46.056 CC lib/sock/sock_rpc.o 00:03:46.314 LIB libspdk_env_dpdk.a 00:03:46.314 SO libspdk_env_dpdk.so.14.0 00:03:46.314 SYMLINK libspdk_env_dpdk.so 00:03:46.573 LIB libspdk_sock.a 00:03:46.573 SO libspdk_sock.so.9.0 00:03:46.573 SYMLINK libspdk_sock.so 00:03:46.831 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:46.831 CC lib/nvme/nvme_ctrlr.o 00:03:46.831 CC lib/nvme/nvme_fabric.o 00:03:46.831 CC lib/nvme/nvme_ns_cmd.o 00:03:46.831 CC lib/nvme/nvme_ns.o 00:03:46.831 CC lib/nvme/nvme_pcie_common.o 00:03:46.831 CC lib/nvme/nvme_pcie.o 00:03:46.831 CC lib/nvme/nvme_qpair.o 00:03:46.831 CC lib/nvme/nvme.o 00:03:46.831 CC lib/nvme/nvme_quirks.o 00:03:46.831 CC lib/nvme/nvme_transport.o 00:03:46.831 CC lib/nvme/nvme_discovery.o 00:03:46.831 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:46.831 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:46.831 CC lib/nvme/nvme_tcp.o 00:03:46.831 CC lib/nvme/nvme_opal.o 00:03:46.831 CC lib/nvme/nvme_io_msg.o 00:03:46.831 CC lib/nvme/nvme_poll_group.o 00:03:46.831 CC lib/nvme/nvme_zns.o 00:03:46.831 CC lib/nvme/nvme_stubs.o 00:03:46.831 CC lib/nvme/nvme_auth.o 00:03:46.831 CC lib/nvme/nvme_cuse.o 00:03:46.831 CC lib/nvme/nvme_vfio_user.o 00:03:46.831 CC lib/nvme/nvme_rdma.o 00:03:47.766 LIB libspdk_thread.a 00:03:47.766 SO libspdk_thread.so.10.0 00:03:47.766 SYMLINK libspdk_thread.so 00:03:48.023 CC lib/blob/blobstore.o 00:03:48.023 CC lib/virtio/virtio.o 00:03:48.023 CC lib/init/json_config.o 00:03:48.023 CC lib/vfu_tgt/tgt_endpoint.o 00:03:48.023 CC lib/virtio/virtio_vhost_user.o 00:03:48.023 CC lib/accel/accel.o 00:03:48.023 CC lib/vfu_tgt/tgt_rpc.o 00:03:48.023 CC lib/init/subsystem.o 00:03:48.023 CC lib/virtio/virtio_vfio_user.o 00:03:48.023 CC lib/blob/request.o 00:03:48.023 CC lib/blob/zeroes.o 00:03:48.024 CC lib/init/subsystem_rpc.o 00:03:48.024 CC lib/virtio/virtio_pci.o 00:03:48.024 CC lib/accel/accel_rpc.o 00:03:48.024 CC lib/blob/blob_bs_dev.o 00:03:48.024 CC lib/init/rpc.o 00:03:48.024 CC lib/accel/accel_sw.o 00:03:48.281 LIB libspdk_init.a 00:03:48.281 SO libspdk_init.so.5.0 00:03:48.281 LIB libspdk_virtio.a 00:03:48.281 LIB libspdk_vfu_tgt.a 00:03:48.281 SYMLINK libspdk_init.so 00:03:48.281 SO libspdk_vfu_tgt.so.3.0 00:03:48.281 SO libspdk_virtio.so.7.0 
00:03:48.540 SYMLINK libspdk_vfu_tgt.so 00:03:48.540 SYMLINK libspdk_virtio.so 00:03:48.540 CC lib/event/app.o 00:03:48.540 CC lib/event/reactor.o 00:03:48.540 CC lib/event/log_rpc.o 00:03:48.540 CC lib/event/app_rpc.o 00:03:48.540 CC lib/event/scheduler_static.o 00:03:49.107 LIB libspdk_event.a 00:03:49.107 SO libspdk_event.so.13.0 00:03:49.107 SYMLINK libspdk_event.so 00:03:49.107 LIB libspdk_accel.a 00:03:49.107 SO libspdk_accel.so.15.0 00:03:49.107 SYMLINK libspdk_accel.so 00:03:49.107 LIB libspdk_nvme.a 00:03:49.365 SO libspdk_nvme.so.13.0 00:03:49.365 CC lib/bdev/bdev.o 00:03:49.365 CC lib/bdev/bdev_rpc.o 00:03:49.365 CC lib/bdev/bdev_zone.o 00:03:49.365 CC lib/bdev/part.o 00:03:49.365 CC lib/bdev/scsi_nvme.o 00:03:49.623 SYMLINK libspdk_nvme.so 00:03:50.999 LIB libspdk_blob.a 00:03:50.999 SO libspdk_blob.so.11.0 00:03:50.999 SYMLINK libspdk_blob.so 00:03:50.999 CC lib/blobfs/blobfs.o 00:03:50.999 CC lib/blobfs/tree.o 00:03:50.999 CC lib/lvol/lvol.o 00:03:51.934 LIB libspdk_blobfs.a 00:03:51.934 LIB libspdk_bdev.a 00:03:51.934 SO libspdk_blobfs.so.10.0 00:03:51.934 SO libspdk_bdev.so.15.0 00:03:51.934 SYMLINK libspdk_blobfs.so 00:03:51.934 LIB libspdk_lvol.a 00:03:51.934 SO libspdk_lvol.so.10.0 00:03:51.934 SYMLINK libspdk_bdev.so 00:03:52.200 SYMLINK libspdk_lvol.so 00:03:52.200 CC lib/nbd/nbd.o 00:03:52.200 CC lib/scsi/dev.o 00:03:52.200 CC lib/nvmf/ctrlr.o 00:03:52.200 CC lib/nbd/nbd_rpc.o 00:03:52.200 CC lib/ublk/ublk.o 00:03:52.200 CC lib/scsi/lun.o 00:03:52.200 CC lib/nvmf/ctrlr_discovery.o 00:03:52.200 CC lib/ublk/ublk_rpc.o 00:03:52.200 CC lib/scsi/port.o 00:03:52.200 CC lib/nvmf/ctrlr_bdev.o 00:03:52.200 CC lib/ftl/ftl_core.o 00:03:52.200 CC lib/scsi/scsi.o 00:03:52.200 CC lib/nvmf/subsystem.o 00:03:52.200 CC lib/nvmf/nvmf.o 00:03:52.200 CC lib/ftl/ftl_init.o 00:03:52.200 CC lib/scsi/scsi_bdev.o 00:03:52.200 CC lib/nvmf/nvmf_rpc.o 00:03:52.200 CC lib/ftl/ftl_layout.o 00:03:52.200 CC lib/nvmf/transport.o 00:03:52.200 CC lib/scsi/scsi_pr.o 00:03:52.200 CC lib/scsi/scsi_rpc.o 00:03:52.200 CC lib/ftl/ftl_debug.o 00:03:52.200 CC lib/ftl/ftl_io.o 00:03:52.200 CC lib/nvmf/tcp.o 00:03:52.200 CC lib/scsi/task.o 00:03:52.200 CC lib/nvmf/stubs.o 00:03:52.200 CC lib/nvmf/vfio_user.o 00:03:52.200 CC lib/ftl/ftl_l2p.o 00:03:52.200 CC lib/ftl/ftl_sb.o 00:03:52.200 CC lib/nvmf/rdma.o 00:03:52.200 CC lib/ftl/ftl_nv_cache.o 00:03:52.200 CC lib/ftl/ftl_l2p_flat.o 00:03:52.200 CC lib/nvmf/auth.o 00:03:52.200 CC lib/ftl/ftl_band.o 00:03:52.200 CC lib/ftl/ftl_band_ops.o 00:03:52.200 CC lib/ftl/ftl_writer.o 00:03:52.200 CC lib/ftl/ftl_reloc.o 00:03:52.200 CC lib/ftl/ftl_rq.o 00:03:52.200 CC lib/ftl/ftl_l2p_cache.o 00:03:52.200 CC lib/ftl/ftl_p2l.o 00:03:52.200 CC lib/ftl/mngt/ftl_mngt.o 00:03:52.200 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:52.200 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:52.200 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:52.200 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:52.201 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:52.201 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:52.201 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:52.459 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:52.459 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:52.459 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:52.723 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:52.723 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:52.723 CC lib/ftl/utils/ftl_conf.o 00:03:52.723 CC lib/ftl/utils/ftl_md.o 00:03:52.723 CC lib/ftl/utils/ftl_mempool.o 00:03:52.723 CC lib/ftl/utils/ftl_bitmap.o 00:03:52.723 CC lib/ftl/utils/ftl_property.o 00:03:52.723 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 
00:03:52.723 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:52.723 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:52.723 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:52.723 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:52.723 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:52.723 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:52.723 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:52.723 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:52.723 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:52.723 CC lib/ftl/base/ftl_base_dev.o 00:03:52.723 CC lib/ftl/base/ftl_base_bdev.o 00:03:52.984 CC lib/ftl/ftl_trace.o 00:03:52.984 LIB libspdk_nbd.a 00:03:52.984 SO libspdk_nbd.so.7.0 00:03:52.984 SYMLINK libspdk_nbd.so 00:03:53.256 LIB libspdk_scsi.a 00:03:53.256 SO libspdk_scsi.so.9.0 00:03:53.256 LIB libspdk_ublk.a 00:03:53.256 SO libspdk_ublk.so.3.0 00:03:53.256 SYMLINK libspdk_scsi.so 00:03:53.256 SYMLINK libspdk_ublk.so 00:03:53.521 CC lib/vhost/vhost.o 00:03:53.521 CC lib/iscsi/conn.o 00:03:53.521 CC lib/iscsi/init_grp.o 00:03:53.521 CC lib/vhost/vhost_rpc.o 00:03:53.521 CC lib/vhost/vhost_scsi.o 00:03:53.521 CC lib/iscsi/iscsi.o 00:03:53.521 CC lib/iscsi/md5.o 00:03:53.521 CC lib/vhost/vhost_blk.o 00:03:53.521 CC lib/iscsi/param.o 00:03:53.521 CC lib/vhost/rte_vhost_user.o 00:03:53.521 CC lib/iscsi/portal_grp.o 00:03:53.521 CC lib/iscsi/tgt_node.o 00:03:53.521 CC lib/iscsi/iscsi_subsystem.o 00:03:53.521 CC lib/iscsi/iscsi_rpc.o 00:03:53.521 CC lib/iscsi/task.o 00:03:53.780 LIB libspdk_ftl.a 00:03:53.780 SO libspdk_ftl.so.9.0 00:03:54.037 SYMLINK libspdk_ftl.so 00:03:54.603 LIB libspdk_vhost.a 00:03:54.603 SO libspdk_vhost.so.8.0 00:03:54.860 LIB libspdk_nvmf.a 00:03:54.860 SYMLINK libspdk_vhost.so 00:03:54.860 SO libspdk_nvmf.so.18.0 00:03:54.860 LIB libspdk_iscsi.a 00:03:54.860 SO libspdk_iscsi.so.8.0 00:03:55.118 SYMLINK libspdk_nvmf.so 00:03:55.118 SYMLINK libspdk_iscsi.so 00:03:55.376 CC module/env_dpdk/env_dpdk_rpc.o 00:03:55.376 CC module/vfu_device/vfu_virtio.o 00:03:55.376 CC module/vfu_device/vfu_virtio_blk.o 00:03:55.376 CC module/vfu_device/vfu_virtio_scsi.o 00:03:55.376 CC module/vfu_device/vfu_virtio_rpc.o 00:03:55.376 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:55.376 CC module/blob/bdev/blob_bdev.o 00:03:55.376 CC module/sock/posix/posix.o 00:03:55.376 CC module/accel/error/accel_error.o 00:03:55.376 CC module/accel/ioat/accel_ioat.o 00:03:55.376 CC module/accel/dsa/accel_dsa.o 00:03:55.376 CC module/accel/error/accel_error_rpc.o 00:03:55.376 CC module/accel/ioat/accel_ioat_rpc.o 00:03:55.376 CC module/accel/iaa/accel_iaa.o 00:03:55.376 CC module/scheduler/gscheduler/gscheduler.o 00:03:55.376 CC module/accel/dsa/accel_dsa_rpc.o 00:03:55.376 CC module/accel/iaa/accel_iaa_rpc.o 00:03:55.376 CC module/keyring/file/keyring.o 00:03:55.376 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:55.376 CC module/keyring/file/keyring_rpc.o 00:03:55.635 LIB libspdk_env_dpdk_rpc.a 00:03:55.635 SO libspdk_env_dpdk_rpc.so.6.0 00:03:55.635 SYMLINK libspdk_env_dpdk_rpc.so 00:03:55.635 LIB libspdk_scheduler_gscheduler.a 00:03:55.635 LIB libspdk_keyring_file.a 00:03:55.635 LIB libspdk_scheduler_dpdk_governor.a 00:03:55.635 SO libspdk_scheduler_gscheduler.so.4.0 00:03:55.635 SO libspdk_keyring_file.so.1.0 00:03:55.635 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:55.635 LIB libspdk_accel_error.a 00:03:55.635 LIB libspdk_accel_ioat.a 00:03:55.636 LIB libspdk_scheduler_dynamic.a 00:03:55.636 SO libspdk_accel_error.so.2.0 00:03:55.636 SO libspdk_scheduler_dynamic.so.4.0 00:03:55.636 SYMLINK libspdk_scheduler_gscheduler.so 00:03:55.636 SO 
libspdk_accel_ioat.so.6.0 00:03:55.636 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:55.636 SYMLINK libspdk_keyring_file.so 00:03:55.636 LIB libspdk_accel_iaa.a 00:03:55.636 LIB libspdk_accel_dsa.a 00:03:55.636 SO libspdk_accel_iaa.so.3.0 00:03:55.636 SYMLINK libspdk_accel_error.so 00:03:55.636 LIB libspdk_blob_bdev.a 00:03:55.636 SYMLINK libspdk_scheduler_dynamic.so 00:03:55.636 SO libspdk_accel_dsa.so.5.0 00:03:55.636 SYMLINK libspdk_accel_ioat.so 00:03:55.636 SO libspdk_blob_bdev.so.11.0 00:03:55.894 SYMLINK libspdk_accel_iaa.so 00:03:55.894 SYMLINK libspdk_accel_dsa.so 00:03:55.894 SYMLINK libspdk_blob_bdev.so 00:03:55.894 LIB libspdk_vfu_device.a 00:03:56.154 SO libspdk_vfu_device.so.3.0 00:03:56.154 CC module/bdev/lvol/vbdev_lvol.o 00:03:56.154 CC module/bdev/null/bdev_null.o 00:03:56.154 CC module/bdev/nvme/bdev_nvme.o 00:03:56.154 CC module/bdev/raid/bdev_raid_rpc.o 00:03:56.154 CC module/bdev/raid/bdev_raid.o 00:03:56.154 CC module/bdev/split/vbdev_split.o 00:03:56.154 CC module/bdev/error/vbdev_error.o 00:03:56.154 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:56.154 CC module/bdev/split/vbdev_split_rpc.o 00:03:56.154 CC module/bdev/raid/bdev_raid_sb.o 00:03:56.154 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:56.154 CC module/blobfs/bdev/blobfs_bdev.o 00:03:56.154 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:56.154 CC module/bdev/null/bdev_null_rpc.o 00:03:56.154 CC module/bdev/ftl/bdev_ftl.o 00:03:56.154 CC module/bdev/error/vbdev_error_rpc.o 00:03:56.154 CC module/bdev/nvme/nvme_rpc.o 00:03:56.154 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:56.154 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:56.154 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:56.154 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:56.154 CC module/bdev/raid/raid0.o 00:03:56.154 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:56.154 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:56.154 CC module/bdev/malloc/bdev_malloc.o 00:03:56.154 CC module/bdev/raid/raid1.o 00:03:56.154 CC module/bdev/delay/vbdev_delay.o 00:03:56.154 CC module/bdev/aio/bdev_aio.o 00:03:56.154 CC module/bdev/nvme/bdev_mdns_client.o 00:03:56.154 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:56.154 CC module/bdev/iscsi/bdev_iscsi.o 00:03:56.154 CC module/bdev/aio/bdev_aio_rpc.o 00:03:56.154 CC module/bdev/raid/concat.o 00:03:56.154 CC module/bdev/nvme/vbdev_opal.o 00:03:56.154 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:56.154 CC module/bdev/gpt/gpt.o 00:03:56.154 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:56.154 CC module/bdev/gpt/vbdev_gpt.o 00:03:56.154 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:56.154 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:56.154 CC module/bdev/passthru/vbdev_passthru.o 00:03:56.154 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:56.154 SYMLINK libspdk_vfu_device.so 00:03:56.412 LIB libspdk_sock_posix.a 00:03:56.412 SO libspdk_sock_posix.so.6.0 00:03:56.412 LIB libspdk_bdev_gpt.a 00:03:56.412 LIB libspdk_blobfs_bdev.a 00:03:56.412 SYMLINK libspdk_sock_posix.so 00:03:56.412 SO libspdk_bdev_gpt.so.6.0 00:03:56.412 SO libspdk_blobfs_bdev.so.6.0 00:03:56.412 LIB libspdk_bdev_split.a 00:03:56.412 LIB libspdk_bdev_zone_block.a 00:03:56.412 SO libspdk_bdev_split.so.6.0 00:03:56.412 SYMLINK libspdk_bdev_gpt.so 00:03:56.671 LIB libspdk_bdev_ftl.a 00:03:56.671 SO libspdk_bdev_zone_block.so.6.0 00:03:56.671 LIB libspdk_bdev_error.a 00:03:56.671 LIB libspdk_bdev_null.a 00:03:56.671 SYMLINK libspdk_blobfs_bdev.so 00:03:56.671 LIB libspdk_bdev_passthru.a 00:03:56.671 SO libspdk_bdev_ftl.so.6.0 00:03:56.671 SO 
libspdk_bdev_error.so.6.0 00:03:56.671 SO libspdk_bdev_null.so.6.0 00:03:56.671 SYMLINK libspdk_bdev_split.so 00:03:56.671 LIB libspdk_bdev_delay.a 00:03:56.671 LIB libspdk_bdev_aio.a 00:03:56.671 SO libspdk_bdev_passthru.so.6.0 00:03:56.671 SYMLINK libspdk_bdev_zone_block.so 00:03:56.671 SO libspdk_bdev_delay.so.6.0 00:03:56.671 SO libspdk_bdev_aio.so.6.0 00:03:56.671 SYMLINK libspdk_bdev_ftl.so 00:03:56.671 SYMLINK libspdk_bdev_error.so 00:03:56.671 LIB libspdk_bdev_iscsi.a 00:03:56.671 SYMLINK libspdk_bdev_null.so 00:03:56.671 SYMLINK libspdk_bdev_passthru.so 00:03:56.671 SO libspdk_bdev_iscsi.so.6.0 00:03:56.671 LIB libspdk_bdev_malloc.a 00:03:56.671 SYMLINK libspdk_bdev_delay.so 00:03:56.671 SYMLINK libspdk_bdev_aio.so 00:03:56.671 SO libspdk_bdev_malloc.so.6.0 00:03:56.671 SYMLINK libspdk_bdev_iscsi.so 00:03:56.671 SYMLINK libspdk_bdev_malloc.so 00:03:56.671 LIB libspdk_bdev_lvol.a 00:03:56.929 LIB libspdk_bdev_virtio.a 00:03:56.929 SO libspdk_bdev_lvol.so.6.0 00:03:56.929 SO libspdk_bdev_virtio.so.6.0 00:03:56.929 SYMLINK libspdk_bdev_lvol.so 00:03:56.929 SYMLINK libspdk_bdev_virtio.so 00:03:57.187 LIB libspdk_bdev_raid.a 00:03:57.187 SO libspdk_bdev_raid.so.6.0 00:03:57.444 SYMLINK libspdk_bdev_raid.so 00:03:58.409 LIB libspdk_bdev_nvme.a 00:03:58.409 SO libspdk_bdev_nvme.so.7.0 00:03:58.669 SYMLINK libspdk_bdev_nvme.so 00:03:58.927 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:58.927 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:58.927 CC module/event/subsystems/keyring/keyring.o 00:03:58.927 CC module/event/subsystems/scheduler/scheduler.o 00:03:58.927 CC module/event/subsystems/iobuf/iobuf.o 00:03:58.927 CC module/event/subsystems/sock/sock.o 00:03:58.927 CC module/event/subsystems/vmd/vmd.o 00:03:58.927 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:58.927 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:59.185 LIB libspdk_event_keyring.a 00:03:59.185 LIB libspdk_event_sock.a 00:03:59.185 LIB libspdk_event_vhost_blk.a 00:03:59.185 LIB libspdk_event_scheduler.a 00:03:59.185 LIB libspdk_event_vfu_tgt.a 00:03:59.185 LIB libspdk_event_vmd.a 00:03:59.185 SO libspdk_event_keyring.so.1.0 00:03:59.185 LIB libspdk_event_iobuf.a 00:03:59.185 SO libspdk_event_sock.so.5.0 00:03:59.185 SO libspdk_event_vhost_blk.so.3.0 00:03:59.185 SO libspdk_event_vfu_tgt.so.3.0 00:03:59.185 SO libspdk_event_scheduler.so.4.0 00:03:59.185 SO libspdk_event_vmd.so.6.0 00:03:59.185 SO libspdk_event_iobuf.so.3.0 00:03:59.185 SYMLINK libspdk_event_keyring.so 00:03:59.185 SYMLINK libspdk_event_sock.so 00:03:59.185 SYMLINK libspdk_event_vhost_blk.so 00:03:59.185 SYMLINK libspdk_event_vfu_tgt.so 00:03:59.185 SYMLINK libspdk_event_scheduler.so 00:03:59.185 SYMLINK libspdk_event_vmd.so 00:03:59.185 SYMLINK libspdk_event_iobuf.so 00:03:59.444 CC module/event/subsystems/accel/accel.o 00:03:59.444 LIB libspdk_event_accel.a 00:03:59.444 SO libspdk_event_accel.so.6.0 00:03:59.702 SYMLINK libspdk_event_accel.so 00:03:59.702 CC module/event/subsystems/bdev/bdev.o 00:03:59.960 LIB libspdk_event_bdev.a 00:03:59.960 SO libspdk_event_bdev.so.6.0 00:03:59.960 SYMLINK libspdk_event_bdev.so 00:04:00.218 CC module/event/subsystems/scsi/scsi.o 00:04:00.218 CC module/event/subsystems/nbd/nbd.o 00:04:00.218 CC module/event/subsystems/ublk/ublk.o 00:04:00.218 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:00.218 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:00.218 LIB libspdk_event_ublk.a 00:04:00.218 LIB libspdk_event_nbd.a 00:04:00.475 LIB libspdk_event_scsi.a 00:04:00.475 SO libspdk_event_ublk.so.3.0 
00:04:00.475 SO libspdk_event_nbd.so.6.0 00:04:00.475 SO libspdk_event_scsi.so.6.0 00:04:00.475 SYMLINK libspdk_event_ublk.so 00:04:00.475 SYMLINK libspdk_event_nbd.so 00:04:00.475 SYMLINK libspdk_event_scsi.so 00:04:00.475 LIB libspdk_event_nvmf.a 00:04:00.475 SO libspdk_event_nvmf.so.6.0 00:04:00.475 SYMLINK libspdk_event_nvmf.so 00:04:00.475 CC module/event/subsystems/iscsi/iscsi.o 00:04:00.475 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:00.733 LIB libspdk_event_vhost_scsi.a 00:04:00.733 LIB libspdk_event_iscsi.a 00:04:00.733 SO libspdk_event_vhost_scsi.so.3.0 00:04:00.733 SO libspdk_event_iscsi.so.6.0 00:04:00.733 SYMLINK libspdk_event_vhost_scsi.so 00:04:00.733 SYMLINK libspdk_event_iscsi.so 00:04:00.990 SO libspdk.so.6.0 00:04:00.990 SYMLINK libspdk.so 00:04:01.251 CXX app/trace/trace.o 00:04:01.251 CC app/trace_record/trace_record.o 00:04:01.251 CC app/spdk_lspci/spdk_lspci.o 00:04:01.251 CC app/spdk_top/spdk_top.o 00:04:01.251 CC app/spdk_nvme_perf/perf.o 00:04:01.251 CC app/spdk_nvme_identify/identify.o 00:04:01.251 CC test/rpc_client/rpc_client_test.o 00:04:01.251 TEST_HEADER include/spdk/accel.h 00:04:01.251 TEST_HEADER include/spdk/accel_module.h 00:04:01.251 TEST_HEADER include/spdk/assert.h 00:04:01.251 CC app/spdk_nvme_discover/discovery_aer.o 00:04:01.251 TEST_HEADER include/spdk/barrier.h 00:04:01.251 TEST_HEADER include/spdk/base64.h 00:04:01.251 TEST_HEADER include/spdk/bdev.h 00:04:01.251 TEST_HEADER include/spdk/bdev_module.h 00:04:01.251 TEST_HEADER include/spdk/bdev_zone.h 00:04:01.251 TEST_HEADER include/spdk/bit_array.h 00:04:01.251 TEST_HEADER include/spdk/bit_pool.h 00:04:01.251 TEST_HEADER include/spdk/blob_bdev.h 00:04:01.251 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:01.251 TEST_HEADER include/spdk/blobfs.h 00:04:01.251 TEST_HEADER include/spdk/blob.h 00:04:01.251 TEST_HEADER include/spdk/conf.h 00:04:01.251 TEST_HEADER include/spdk/config.h 00:04:01.251 TEST_HEADER include/spdk/cpuset.h 00:04:01.251 TEST_HEADER include/spdk/crc16.h 00:04:01.251 TEST_HEADER include/spdk/crc32.h 00:04:01.251 TEST_HEADER include/spdk/crc64.h 00:04:01.251 TEST_HEADER include/spdk/dif.h 00:04:01.251 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:01.251 CC app/spdk_dd/spdk_dd.o 00:04:01.251 TEST_HEADER include/spdk/dma.h 00:04:01.251 TEST_HEADER include/spdk/endian.h 00:04:01.251 TEST_HEADER include/spdk/env_dpdk.h 00:04:01.251 CC app/nvmf_tgt/nvmf_main.o 00:04:01.251 TEST_HEADER include/spdk/env.h 00:04:01.251 TEST_HEADER include/spdk/event.h 00:04:01.251 CC app/iscsi_tgt/iscsi_tgt.o 00:04:01.251 TEST_HEADER include/spdk/fd_group.h 00:04:01.251 CC app/vhost/vhost.o 00:04:01.251 TEST_HEADER include/spdk/fd.h 00:04:01.251 TEST_HEADER include/spdk/file.h 00:04:01.251 TEST_HEADER include/spdk/ftl.h 00:04:01.251 TEST_HEADER include/spdk/gpt_spec.h 00:04:01.251 TEST_HEADER include/spdk/hexlify.h 00:04:01.251 TEST_HEADER include/spdk/histogram_data.h 00:04:01.251 TEST_HEADER include/spdk/idxd.h 00:04:01.251 TEST_HEADER include/spdk/idxd_spec.h 00:04:01.251 TEST_HEADER include/spdk/init.h 00:04:01.251 CC examples/ioat/verify/verify.o 00:04:01.251 TEST_HEADER include/spdk/ioat.h 00:04:01.251 CC app/spdk_tgt/spdk_tgt.o 00:04:01.251 TEST_HEADER include/spdk/ioat_spec.h 00:04:01.251 TEST_HEADER include/spdk/iscsi_spec.h 00:04:01.251 TEST_HEADER include/spdk/json.h 00:04:01.251 CC test/thread/poller_perf/poller_perf.o 00:04:01.251 TEST_HEADER include/spdk/jsonrpc.h 00:04:01.251 CC test/event/reactor_perf/reactor_perf.o 00:04:01.251 CC app/fio/nvme/fio_plugin.o 00:04:01.251 CC 
test/event/reactor/reactor.o 00:04:01.251 TEST_HEADER include/spdk/keyring.h 00:04:01.251 CC test/nvme/aer/aer.o 00:04:01.251 TEST_HEADER include/spdk/keyring_module.h 00:04:01.251 CC test/app/jsoncat/jsoncat.o 00:04:01.251 CC examples/ioat/perf/perf.o 00:04:01.251 TEST_HEADER include/spdk/likely.h 00:04:01.251 TEST_HEADER include/spdk/log.h 00:04:01.251 CC test/nvme/reset/reset.o 00:04:01.251 CC examples/nvme/hello_world/hello_world.o 00:04:01.251 CC test/app/histogram_perf/histogram_perf.o 00:04:01.251 CC examples/util/zipf/zipf.o 00:04:01.251 CC test/event/event_perf/event_perf.o 00:04:01.251 TEST_HEADER include/spdk/lvol.h 00:04:01.251 TEST_HEADER include/spdk/memory.h 00:04:01.251 TEST_HEADER include/spdk/mmio.h 00:04:01.251 CC examples/sock/hello_world/hello_sock.o 00:04:01.251 TEST_HEADER include/spdk/nbd.h 00:04:01.251 CC examples/vmd/lsvmd/lsvmd.o 00:04:01.251 TEST_HEADER include/spdk/notify.h 00:04:01.251 CC examples/idxd/perf/perf.o 00:04:01.251 CC test/event/app_repeat/app_repeat.o 00:04:01.251 CC examples/accel/perf/accel_perf.o 00:04:01.251 TEST_HEADER include/spdk/nvme.h 00:04:01.251 TEST_HEADER include/spdk/nvme_intel.h 00:04:01.251 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:01.251 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:01.251 TEST_HEADER include/spdk/nvme_spec.h 00:04:01.251 TEST_HEADER include/spdk/nvme_zns.h 00:04:01.251 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:01.251 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:01.251 CC test/dma/test_dma/test_dma.o 00:04:01.251 TEST_HEADER include/spdk/nvmf.h 00:04:01.251 CC test/blobfs/mkfs/mkfs.o 00:04:01.251 TEST_HEADER include/spdk/nvmf_spec.h 00:04:01.514 TEST_HEADER include/spdk/nvmf_transport.h 00:04:01.514 CC test/bdev/bdevio/bdevio.o 00:04:01.514 CC examples/thread/thread/thread_ex.o 00:04:01.514 TEST_HEADER include/spdk/opal.h 00:04:01.514 TEST_HEADER include/spdk/opal_spec.h 00:04:01.514 CC examples/bdev/bdevperf/bdevperf.o 00:04:01.514 TEST_HEADER include/spdk/pci_ids.h 00:04:01.514 CC test/event/scheduler/scheduler.o 00:04:01.514 CC examples/blob/hello_world/hello_blob.o 00:04:01.514 TEST_HEADER include/spdk/pipe.h 00:04:01.514 TEST_HEADER include/spdk/queue.h 00:04:01.515 CC examples/bdev/hello_world/hello_bdev.o 00:04:01.515 TEST_HEADER include/spdk/reduce.h 00:04:01.515 CC test/app/bdev_svc/bdev_svc.o 00:04:01.515 TEST_HEADER include/spdk/rpc.h 00:04:01.515 TEST_HEADER include/spdk/scheduler.h 00:04:01.515 CC test/accel/dif/dif.o 00:04:01.515 CC app/fio/bdev/fio_plugin.o 00:04:01.515 TEST_HEADER include/spdk/scsi.h 00:04:01.515 TEST_HEADER include/spdk/scsi_spec.h 00:04:01.515 TEST_HEADER include/spdk/sock.h 00:04:01.515 TEST_HEADER include/spdk/stdinc.h 00:04:01.515 TEST_HEADER include/spdk/string.h 00:04:01.515 TEST_HEADER include/spdk/thread.h 00:04:01.515 CC examples/nvmf/nvmf/nvmf.o 00:04:01.515 TEST_HEADER include/spdk/trace.h 00:04:01.515 TEST_HEADER include/spdk/trace_parser.h 00:04:01.515 TEST_HEADER include/spdk/tree.h 00:04:01.515 TEST_HEADER include/spdk/ublk.h 00:04:01.515 TEST_HEADER include/spdk/util.h 00:04:01.515 TEST_HEADER include/spdk/uuid.h 00:04:01.515 TEST_HEADER include/spdk/version.h 00:04:01.515 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:01.515 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:01.515 LINK spdk_lspci 00:04:01.515 TEST_HEADER include/spdk/vhost.h 00:04:01.515 TEST_HEADER include/spdk/vmd.h 00:04:01.515 TEST_HEADER include/spdk/xor.h 00:04:01.515 CC test/env/mem_callbacks/mem_callbacks.o 00:04:01.515 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:01.515 TEST_HEADER 
include/spdk/zipf.h 00:04:01.515 CXX test/cpp_headers/accel.o 00:04:01.515 CC test/lvol/esnap/esnap.o 00:04:01.515 LINK rpc_client_test 00:04:01.515 LINK spdk_nvme_discover 00:04:01.515 LINK reactor 00:04:01.515 LINK reactor_perf 00:04:01.515 LINK poller_perf 00:04:01.515 LINK lsvmd 00:04:01.515 LINK nvmf_tgt 00:04:01.515 LINK jsoncat 00:04:01.515 LINK interrupt_tgt 00:04:01.515 LINK histogram_perf 00:04:01.781 LINK zipf 00:04:01.781 LINK vhost 00:04:01.781 LINK event_perf 00:04:01.781 LINK app_repeat 00:04:01.781 LINK spdk_trace_record 00:04:01.781 LINK iscsi_tgt 00:04:01.781 LINK spdk_tgt 00:04:01.781 LINK verify 00:04:01.781 LINK ioat_perf 00:04:01.781 LINK hello_world 00:04:01.781 LINK bdev_svc 00:04:01.781 LINK mkfs 00:04:01.781 LINK hello_sock 00:04:01.781 LINK scheduler 00:04:01.781 CXX test/cpp_headers/accel_module.o 00:04:01.781 LINK reset 00:04:01.781 LINK hello_blob 00:04:01.781 LINK aer 00:04:01.781 LINK thread 00:04:01.781 LINK hello_bdev 00:04:01.781 CXX test/cpp_headers/assert.o 00:04:02.043 CXX test/cpp_headers/barrier.o 00:04:02.043 CXX test/cpp_headers/base64.o 00:04:02.043 LINK spdk_dd 00:04:02.043 CXX test/cpp_headers/bdev.o 00:04:02.043 LINK idxd_perf 00:04:02.043 CC examples/blob/cli/blobcli.o 00:04:02.043 LINK nvmf 00:04:02.043 CC examples/nvme/reconnect/reconnect.o 00:04:02.043 LINK spdk_trace 00:04:02.043 CXX test/cpp_headers/bdev_module.o 00:04:02.043 CC test/app/stub/stub.o 00:04:02.043 CC test/env/vtophys/vtophys.o 00:04:02.043 CC examples/vmd/led/led.o 00:04:02.043 CC test/nvme/sgl/sgl.o 00:04:02.043 LINK test_dma 00:04:02.043 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:02.043 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:02.043 LINK dif 00:04:02.314 LINK bdevio 00:04:02.314 CC test/nvme/e2edp/nvme_dp.o 00:04:02.314 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:02.314 LINK accel_perf 00:04:02.314 CC examples/nvme/arbitration/arbitration.o 00:04:02.314 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:02.314 CXX test/cpp_headers/bdev_zone.o 00:04:02.314 CC test/nvme/overhead/overhead.o 00:04:02.314 CXX test/cpp_headers/bit_array.o 00:04:02.314 CXX test/cpp_headers/bit_pool.o 00:04:02.314 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:02.314 LINK nvme_fuzz 00:04:02.314 CC test/env/memory/memory_ut.o 00:04:02.314 CXX test/cpp_headers/blob_bdev.o 00:04:02.314 CC examples/nvme/hotplug/hotplug.o 00:04:02.314 CC test/env/pci/pci_ut.o 00:04:02.314 CXX test/cpp_headers/blobfs_bdev.o 00:04:02.314 CC test/nvme/err_injection/err_injection.o 00:04:02.314 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:02.314 LINK spdk_bdev 00:04:02.314 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:02.314 CC test/nvme/startup/startup.o 00:04:02.314 CC examples/nvme/abort/abort.o 00:04:02.314 CXX test/cpp_headers/blobfs.o 00:04:02.314 LINK spdk_nvme 00:04:02.314 LINK vtophys 00:04:02.314 LINK led 00:04:02.314 CC test/nvme/reserve/reserve.o 00:04:02.574 CC test/nvme/simple_copy/simple_copy.o 00:04:02.574 LINK stub 00:04:02.574 CC test/nvme/connect_stress/connect_stress.o 00:04:02.574 CXX test/cpp_headers/blob.o 00:04:02.574 CXX test/cpp_headers/conf.o 00:04:02.574 CC test/nvme/boot_partition/boot_partition.o 00:04:02.574 CC test/nvme/compliance/nvme_compliance.o 00:04:02.574 CXX test/cpp_headers/config.o 00:04:02.574 LINK env_dpdk_post_init 00:04:02.574 CXX test/cpp_headers/cpuset.o 00:04:02.574 CXX test/cpp_headers/crc16.o 00:04:02.574 CC test/nvme/fused_ordering/fused_ordering.o 00:04:02.574 CXX test/cpp_headers/crc32.o 00:04:02.574 CXX test/cpp_headers/crc64.o 
00:04:02.574 CXX test/cpp_headers/dif.o 00:04:02.574 CXX test/cpp_headers/dma.o 00:04:02.574 CXX test/cpp_headers/endian.o 00:04:02.574 LINK sgl 00:04:02.574 LINK mem_callbacks 00:04:02.574 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:02.837 LINK reconnect 00:04:02.837 CXX test/cpp_headers/env_dpdk.o 00:04:02.837 CC test/nvme/cuse/cuse.o 00:04:02.837 LINK err_injection 00:04:02.837 LINK pmr_persistence 00:04:02.837 CXX test/cpp_headers/env.o 00:04:02.837 CC test/nvme/fdp/fdp.o 00:04:02.838 CXX test/cpp_headers/event.o 00:04:02.838 LINK startup 00:04:02.838 LINK cmb_copy 00:04:02.838 LINK spdk_nvme_perf 00:04:02.838 CXX test/cpp_headers/fd_group.o 00:04:02.838 LINK nvme_dp 00:04:02.838 CXX test/cpp_headers/fd.o 00:04:02.838 LINK spdk_nvme_identify 00:04:02.838 CXX test/cpp_headers/file.o 00:04:02.838 LINK hotplug 00:04:02.838 LINK overhead 00:04:02.838 LINK reserve 00:04:02.838 LINK connect_stress 00:04:02.838 LINK arbitration 00:04:02.838 CXX test/cpp_headers/ftl.o 00:04:02.838 LINK bdevperf 00:04:02.838 CXX test/cpp_headers/gpt_spec.o 00:04:02.838 LINK boot_partition 00:04:02.838 CXX test/cpp_headers/hexlify.o 00:04:02.838 LINK spdk_top 00:04:02.838 CXX test/cpp_headers/histogram_data.o 00:04:02.838 LINK simple_copy 00:04:02.838 LINK blobcli 00:04:02.838 CXX test/cpp_headers/idxd.o 00:04:02.838 CXX test/cpp_headers/idxd_spec.o 00:04:03.110 CXX test/cpp_headers/init.o 00:04:03.110 CXX test/cpp_headers/ioat.o 00:04:03.110 CXX test/cpp_headers/ioat_spec.o 00:04:03.110 CXX test/cpp_headers/iscsi_spec.o 00:04:03.110 CXX test/cpp_headers/json.o 00:04:03.110 CXX test/cpp_headers/jsonrpc.o 00:04:03.110 LINK fused_ordering 00:04:03.110 CXX test/cpp_headers/keyring.o 00:04:03.111 LINK vhost_fuzz 00:04:03.111 LINK pci_ut 00:04:03.111 CXX test/cpp_headers/keyring_module.o 00:04:03.111 CXX test/cpp_headers/likely.o 00:04:03.111 CXX test/cpp_headers/log.o 00:04:03.111 LINK nvme_manage 00:04:03.111 CXX test/cpp_headers/lvol.o 00:04:03.111 CXX test/cpp_headers/memory.o 00:04:03.111 CXX test/cpp_headers/mmio.o 00:04:03.111 CXX test/cpp_headers/nbd.o 00:04:03.111 LINK doorbell_aers 00:04:03.111 LINK abort 00:04:03.111 CXX test/cpp_headers/notify.o 00:04:03.111 CXX test/cpp_headers/nvme.o 00:04:03.111 CXX test/cpp_headers/nvme_intel.o 00:04:03.111 CXX test/cpp_headers/nvme_ocssd.o 00:04:03.111 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:03.111 CXX test/cpp_headers/nvme_spec.o 00:04:03.111 CXX test/cpp_headers/nvme_zns.o 00:04:03.111 CXX test/cpp_headers/nvmf_cmd.o 00:04:03.111 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:03.111 CXX test/cpp_headers/nvmf.o 00:04:03.111 CXX test/cpp_headers/nvmf_spec.o 00:04:03.111 CXX test/cpp_headers/nvmf_transport.o 00:04:03.111 LINK nvme_compliance 00:04:03.111 CXX test/cpp_headers/opal.o 00:04:03.370 CXX test/cpp_headers/opal_spec.o 00:04:03.370 CXX test/cpp_headers/pci_ids.o 00:04:03.370 CXX test/cpp_headers/pipe.o 00:04:03.370 CXX test/cpp_headers/queue.o 00:04:03.370 CXX test/cpp_headers/reduce.o 00:04:03.370 CXX test/cpp_headers/rpc.o 00:04:03.370 CXX test/cpp_headers/scheduler.o 00:04:03.370 CXX test/cpp_headers/scsi.o 00:04:03.370 CXX test/cpp_headers/scsi_spec.o 00:04:03.370 CXX test/cpp_headers/sock.o 00:04:03.370 CXX test/cpp_headers/stdinc.o 00:04:03.370 CXX test/cpp_headers/string.o 00:04:03.370 CXX test/cpp_headers/thread.o 00:04:03.370 CXX test/cpp_headers/trace.o 00:04:03.370 CXX test/cpp_headers/trace_parser.o 00:04:03.370 CXX test/cpp_headers/tree.o 00:04:03.370 CXX test/cpp_headers/ublk.o 00:04:03.370 LINK fdp 00:04:03.370 CXX test/cpp_headers/util.o 
00:04:03.370 CXX test/cpp_headers/uuid.o 00:04:03.370 CXX test/cpp_headers/version.o 00:04:03.370 CXX test/cpp_headers/vfio_user_pci.o 00:04:03.370 CXX test/cpp_headers/vfio_user_spec.o 00:04:03.370 CXX test/cpp_headers/vhost.o 00:04:03.370 CXX test/cpp_headers/xor.o 00:04:03.370 CXX test/cpp_headers/vmd.o 00:04:03.370 CXX test/cpp_headers/zipf.o 00:04:03.935 LINK memory_ut 00:04:04.193 LINK cuse 00:04:04.451 LINK iscsi_fuzz 00:04:06.981 LINK esnap 00:04:07.240 00:04:07.240 real 0m40.330s 00:04:07.240 user 7m30.943s 00:04:07.240 sys 1m47.185s 00:04:07.240 02:44:57 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:04:07.240 02:44:57 make -- common/autotest_common.sh@10 -- $ set +x 00:04:07.240 ************************************ 00:04:07.240 END TEST make 00:04:07.240 ************************************ 00:04:07.240 02:44:57 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:07.240 02:44:57 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:07.240 02:44:57 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:07.240 02:44:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:07.240 02:44:57 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:07.240 02:44:57 -- pm/common@44 -- $ pid=115794 00:04:07.240 02:44:57 -- pm/common@50 -- $ kill -TERM 115794 00:04:07.240 02:44:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:07.240 02:44:57 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:07.240 02:44:57 -- pm/common@44 -- $ pid=115796 00:04:07.240 02:44:57 -- pm/common@50 -- $ kill -TERM 115796 00:04:07.240 02:44:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:07.240 02:44:57 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:07.240 02:44:57 -- pm/common@44 -- $ pid=115798 00:04:07.240 02:44:57 -- pm/common@50 -- $ kill -TERM 115798 00:04:07.240 02:44:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:07.240 02:44:57 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:07.240 02:44:57 -- pm/common@44 -- $ pid=115832 00:04:07.240 02:44:57 -- pm/common@50 -- $ sudo -E kill -TERM 115832 00:04:07.240 02:44:57 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:07.240 02:44:57 -- nvmf/common.sh@7 -- # uname -s 00:04:07.240 02:44:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:07.240 02:44:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:07.240 02:44:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:07.240 02:44:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:07.240 02:44:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:07.240 02:44:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:07.240 02:44:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:07.240 02:44:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:07.240 02:44:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:07.240 02:44:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:07.240 02:44:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:07.240 02:44:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:07.240 02:44:57 -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:07.240 02:44:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:07.240 02:44:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:07.240 02:44:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:07.240 02:44:57 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:07.240 02:44:57 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:07.240 02:44:57 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:07.240 02:44:57 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:07.241 02:44:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.241 02:44:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.241 02:44:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.241 02:44:57 -- paths/export.sh@5 -- # export PATH 00:04:07.241 02:44:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.241 02:44:57 -- nvmf/common.sh@47 -- # : 0 00:04:07.241 02:44:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:07.241 02:44:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:07.241 02:44:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:07.241 02:44:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:07.241 02:44:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:07.241 02:44:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:07.241 02:44:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:07.241 02:44:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:07.241 02:44:58 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:07.241 02:44:58 -- spdk/autotest.sh@32 -- # uname -s 00:04:07.241 02:44:58 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:07.241 02:44:58 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:07.241 02:44:58 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:07.241 02:44:58 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:07.241 02:44:58 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:07.241 02:44:58 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:07.241 02:44:58 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:07.241 02:44:58 -- spdk/autotest.sh@46 -- # 
udevadm=/usr/sbin/udevadm 00:04:07.241 02:44:58 -- spdk/autotest.sh@48 -- # udevadm_pid=192468 00:04:07.241 02:44:58 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:07.241 02:44:58 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:07.241 02:44:58 -- pm/common@17 -- # local monitor 00:04:07.241 02:44:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:07.241 02:44:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:07.241 02:44:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:07.241 02:44:58 -- pm/common@21 -- # date +%s 00:04:07.241 02:44:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:07.241 02:44:58 -- pm/common@21 -- # date +%s 00:04:07.241 02:44:58 -- pm/common@25 -- # sleep 1 00:04:07.241 02:44:58 -- pm/common@21 -- # date +%s 00:04:07.241 02:44:58 -- pm/common@21 -- # date +%s 00:04:07.241 02:44:58 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715561098 00:04:07.241 02:44:58 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715561098 00:04:07.241 02:44:58 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715561098 00:04:07.241 02:44:58 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715561098 00:04:07.241 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715561098_collect-vmstat.pm.log 00:04:07.241 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715561098_collect-cpu-load.pm.log 00:04:07.241 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715561098_collect-cpu-temp.pm.log 00:04:07.500 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715561098_collect-bmc-pm.bmc.pm.log 00:04:08.438 02:44:59 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:08.438 02:44:59 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:08.438 02:44:59 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:08.438 02:44:59 -- common/autotest_common.sh@10 -- # set +x 00:04:08.438 02:44:59 -- spdk/autotest.sh@59 -- # create_test_list 00:04:08.438 02:44:59 -- common/autotest_common.sh@744 -- # xtrace_disable 00:04:08.438 02:44:59 -- common/autotest_common.sh@10 -- # set +x 00:04:08.438 02:44:59 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:08.438 02:44:59 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:08.438 02:44:59 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:08.438 02:44:59 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:08.438 02:44:59 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:08.438 02:44:59 -- 
spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:08.438 02:44:59 -- common/autotest_common.sh@1451 -- # uname 00:04:08.438 02:44:59 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:04:08.438 02:44:59 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:08.438 02:44:59 -- common/autotest_common.sh@1471 -- # uname 00:04:08.438 02:44:59 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:04:08.438 02:44:59 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:08.438 02:44:59 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:08.438 02:44:59 -- spdk/autotest.sh@72 -- # hash lcov 00:04:08.438 02:44:59 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:08.438 02:44:59 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:08.438 --rc lcov_branch_coverage=1 00:04:08.438 --rc lcov_function_coverage=1 00:04:08.438 --rc genhtml_branch_coverage=1 00:04:08.438 --rc genhtml_function_coverage=1 00:04:08.438 --rc genhtml_legend=1 00:04:08.438 --rc geninfo_all_blocks=1 00:04:08.438 ' 00:04:08.438 02:44:59 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:08.438 --rc lcov_branch_coverage=1 00:04:08.438 --rc lcov_function_coverage=1 00:04:08.438 --rc genhtml_branch_coverage=1 00:04:08.438 --rc genhtml_function_coverage=1 00:04:08.438 --rc genhtml_legend=1 00:04:08.438 --rc geninfo_all_blocks=1 00:04:08.438 ' 00:04:08.438 02:44:59 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:08.438 --rc lcov_branch_coverage=1 00:04:08.438 --rc lcov_function_coverage=1 00:04:08.438 --rc genhtml_branch_coverage=1 00:04:08.438 --rc genhtml_function_coverage=1 00:04:08.438 --rc genhtml_legend=1 00:04:08.438 --rc geninfo_all_blocks=1 00:04:08.438 --no-external' 00:04:08.438 02:44:59 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:08.438 --rc lcov_branch_coverage=1 00:04:08.438 --rc lcov_function_coverage=1 00:04:08.438 --rc genhtml_branch_coverage=1 00:04:08.438 --rc genhtml_function_coverage=1 00:04:08.438 --rc genhtml_legend=1 00:04:08.438 --rc geninfo_all_blocks=1 00:04:08.438 --no-external' 00:04:08.438 02:44:59 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:08.438 lcov: LCOV version 1.14 00:04:08.438 02:44:59 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:23.325 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:23.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:23.325 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:23.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:23.326 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:23.326 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:23.326 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:23.326 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:41.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:41.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:04:41.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:41.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:04:41.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:41.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:04:41.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:41.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:04:41.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:41.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:04:41.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:41.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:04:41.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:41.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:04:41.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:41.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:04:41.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:41.481 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:04:41.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:41.481 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:04:41.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:41.481 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:04:41.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:41.481 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:41.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:41.481 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:04:41.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:41.481 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:04:41.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:41.481 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:04:41.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:04:41.481 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:04:41.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:41.481 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:04:41.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:41.481 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:04:41.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:41.481 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:04:41.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:41.481 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:04:41.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:41.481 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:04:41.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:41.481 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:04:41.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:41.481 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:04:41.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:41.481 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:04:41.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:04:41.481 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:04:41.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:04:41.481 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:04:41.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:41.481 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno
00:04:41.481 geninfo: WARNING: GCOV did not produce any data (each .gcno reported ":no functions found") for the following header-only stubs under /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers: fd, file, ftl, gpt_spec, hexlify, histogram_data, idxd, idxd_spec, init, ioat, ioat_spec, iscsi_spec, json, jsonrpc, keyring, keyring_module, likely, log, mmio, memory, lvol, nbd, notify, nvme, nvme_ocssd, nvme_intel, nvme_ocssd_spec, nvme_zns, nvme_spec, nvmf_cmd, nvmf_fc_spec, nvmf, nvmf_spec, nvmf_transport, opal, opal_spec, pci_ids, queue, reduce, pipe, scheduler, rpc, scsi_spec, scsi, sock, stdinc, string, thread, trace, trace_parser, tree, ublk, uuid, util, version
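These warnings are typically benign: the cpp_headers objects only compile each public SPDK header, so their .gcno files carry no executable functions for geninfo to record. A minimal sketch of filtering such stubs out of a coverage report, assuming lcov/genhtml are installed (illustrative only, the autotest harness drives geninfo itself):

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk          # path taken from this log
    lcov --capture --directory "$SPDK_DIR" --output-file cov_all.info   # gather all coverage counters
    lcov --remove cov_all.info "$SPDK_DIR/test/cpp_headers/*" --output-file cov.info  # drop header-only stubs
    genhtml cov.info --output-directory coverage_report                 # render an HTML report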
00:04:41.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:41.482 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:41.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:41.482 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:41.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:41.482 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:04:41.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:41.482 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:04:41.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:41.482 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:04:41.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:41.482 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:04:42.416 02:45:33 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:42.416 02:45:33 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:42.416 02:45:33 -- common/autotest_common.sh@10 -- # set +x 00:04:42.416 02:45:33 -- spdk/autotest.sh@91 -- # rm -f 00:04:42.416 02:45:33 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:43.351 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:04:43.351 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:43.351 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:43.351 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:43.351 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:43.351 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:43.608 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:04:43.608 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:04:43.608 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:43.608 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:43.608 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:43.608 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:43.608 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:43.608 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:43.608 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:43.608 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:04:43.608 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:43.608 02:45:34 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:43.608 02:45:34 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:43.608 02:45:34 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:43.608 02:45:34 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:43.608 02:45:34 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 
00:04:43.609 02:45:34 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:43.609 02:45:34 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:43.609 02:45:34 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:43.609 02:45:34 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:43.609 02:45:34 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:43.609 02:45:34 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:43.609 02:45:34 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:43.609 02:45:34 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:43.609 02:45:34 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:43.609 02:45:34 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:43.867 No valid GPT data, bailing 00:04:43.867 02:45:34 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:43.867 02:45:34 -- scripts/common.sh@391 -- # pt= 00:04:43.867 02:45:34 -- scripts/common.sh@392 -- # return 1 00:04:43.867 02:45:34 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:43.867 1+0 records in 00:04:43.867 1+0 records out 00:04:43.867 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0026105 s, 402 MB/s 00:04:43.867 02:45:34 -- spdk/autotest.sh@118 -- # sync 00:04:43.867 02:45:34 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:43.867 02:45:34 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:43.867 02:45:34 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:45.765 02:45:36 -- spdk/autotest.sh@124 -- # uname -s 00:04:45.765 02:45:36 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:45.765 02:45:36 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:45.765 02:45:36 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:45.765 02:45:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:45.765 02:45:36 -- common/autotest_common.sh@10 -- # set +x 00:04:45.765 ************************************ 00:04:45.765 START TEST setup.sh 00:04:45.765 ************************************ 00:04:45.765 02:45:36 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:45.765 * Looking for test storage... 00:04:45.765 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:45.765 02:45:36 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:45.765 02:45:36 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:45.765 02:45:36 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:45.765 02:45:36 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:45.765 02:45:36 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:45.765 02:45:36 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:45.765 ************************************ 00:04:45.765 START TEST acl 00:04:45.765 ************************************ 00:04:45.765 02:45:36 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:45.765 * Looking for test storage... 
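The pre_cleanup trace above amounts to: skip any zoned NVMe namespaces, then zero the first MiB of a namespace that has no partition table before the tests reuse it. A rough stand-alone equivalent, assuming /dev/nvme0n1 as in this run (a sketch, not the harness's exact code path):

    dev=/dev/nvme0n1
    zoned=$(cat /sys/block/${dev##*/}/queue/zoned 2>/dev/null)
    if [ -n "$zoned" ] && [ "$zoned" != "none" ]; then
        echo "skipping zoned device $dev"
    elif [ -z "$(blkid -s PTTYPE -o value "$dev")" ]; then   # same PTTYPE probe the trace shows
        dd if=/dev/zero of="$dev" bs=1M count=1              # wipe stale metadata, as in the log
    fi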
00:04:45.765 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:45.765 02:45:36 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:45.765 02:45:36 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:45.765 02:45:36 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:45.765 02:45:36 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:45.765 02:45:36 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:45.765 02:45:36 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:45.765 02:45:36 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:45.765 02:45:36 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:45.765 02:45:36 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:45.765 02:45:36 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:45.765 02:45:36 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:45.765 02:45:36 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:45.765 02:45:36 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:45.765 02:45:36 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:45.765 02:45:36 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:45.765 02:45:36 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:47.138 02:45:37 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:47.138 02:45:37 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:47.138 02:45:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:47.138 02:45:37 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:47.138 02:45:37 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:47.138 02:45:37 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:48.514 Hugepages 00:04:48.514 node hugesize free / total 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:48.514 00:04:48.514 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:48.514 02:45:38 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:48.514 02:45:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:48.514 02:45:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:04:48.514 02:45:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:48.514 02:45:39 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:48.514 02:45:39 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:48.514 02:45:39 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:48.514 02:45:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:48.514 02:45:39 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:48.514 02:45:39 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:48.514 02:45:39 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:48.514 02:45:39 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:48.514 02:45:39 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:48.514 ************************************ 00:04:48.514 START TEST denied 00:04:48.514 ************************************ 00:04:48.514 02:45:39 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:04:48.514 02:45:39 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:04:48.514 02:45:39 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:48.514 02:45:39 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:04:48.514 02:45:39 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:48.514 02:45:39 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:49.449 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:04:49.449 02:45:40 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:04:49.449 02:45:40 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:49.449 02:45:40 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:49.449 02:45:40 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:04:49.449 02:45:40 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:04:49.707 02:45:40 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:49.707 02:45:40 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:49.707 02:45:40 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:49.707 02:45:40 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:49.707 02:45:40 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:52.241 00:04:52.241 real 0m3.446s 00:04:52.241 user 0m1.019s 00:04:52.241 sys 0m1.565s 00:04:52.241 02:45:42 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:52.241 02:45:42 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:52.241 ************************************ 00:04:52.241 END TEST denied 00:04:52.241 ************************************ 00:04:52.241 02:45:42 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:52.241 02:45:42 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:52.241 02:45:42 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:52.241 02:45:42 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:52.241 ************************************ 00:04:52.241 START TEST allowed 00:04:52.241 ************************************ 00:04:52.241 02:45:42 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:04:52.241 02:45:42 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:04:52.241 02:45:42 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:04:52.241 02:45:42 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:52.241 02:45:42 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:52.241 02:45:42 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:54.144 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:54.144 02:45:44 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:54.144 02:45:44 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:54.144 02:45:44 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:54.144 02:45:44 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:54.144 02:45:44 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:56.049 00:04:56.049 real 0m3.803s 00:04:56.049 user 0m1.013s 00:04:56.049 sys 0m1.675s 00:04:56.049 02:45:46 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:56.049 02:45:46 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:56.049 ************************************ 00:04:56.049 END TEST allowed 00:04:56.049 ************************************ 00:04:56.049 00:04:56.049 real 0m10.018s 00:04:56.049 user 0m3.122s 00:04:56.049 sys 0m4.993s 00:04:56.049 02:45:46 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:56.049 02:45:46 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:56.049 ************************************ 00:04:56.049 END TEST acl 00:04:56.049 ************************************ 00:04:56.049 02:45:46 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:56.049 02:45:46 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:56.049 02:45:46 setup.sh -- 
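The denied/allowed cases above exercise scripts/setup.sh purely through its PCI filter environment variables; roughly (illustrative sketch, BDF 0000:88:00.0 is the NVMe controller in this run):

    export PCI_BLOCKED="0000:88:00.0"
    ./scripts/setup.sh config    # expected: "Skipping denied controller at 0000:88:00.0"
    ./scripts/setup.sh reset
    unset PCI_BLOCKED
    export PCI_ALLOWED="0000:88:00.0"
    ./scripts/setup.sh config    # expected: 0000:88:00.0 ... nvme -> vfio-pci
    ./scripts/setup.sh reset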
common/autotest_common.sh@1103 -- # xtrace_disable 00:04:56.049 02:45:46 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:56.049 ************************************ 00:04:56.049 START TEST hugepages 00:04:56.049 ************************************ 00:04:56.049 02:45:46 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:56.049 * Looking for test storage... 00:04:56.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:56.049 02:45:46 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:56.049 02:45:46 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:56.049 02:45:46 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:56.049 02:45:46 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:56.049 02:45:46 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:56.049 02:45:46 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:56.050 02:45:46 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:56.050 02:45:46 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:56.050 02:45:46 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:56.050 02:45:46 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:56.050 02:45:46 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.050 02:45:46 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.050 02:45:46 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.050 02:45:46 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.050 02:45:46 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.050 02:45:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:56.050 02:45:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:56.050 02:45:46 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 37111076 kB' 'MemAvailable: 41761412 kB' 'Buffers: 3728 kB' 'Cached: 16777620 kB' 'SwapCached: 0 kB' 'Active: 12818128 kB' 'Inactive: 4455520 kB' 'Active(anon): 12252296 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4455520 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495704 kB' 'Mapped: 184912 kB' 'Shmem: 11759996 kB' 'KReclaimable: 245216 kB' 'Slab: 640384 kB' 'SReclaimable: 245216 kB' 'SUnreclaim: 395168 kB' 'KernelStack: 13056 kB' 'PageTables: 9160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562316 kB' 'Committed_AS: 13405368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197088 kB' 'VmallocChunk: 0 kB' 'Percpu: 41088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2872924 kB' 'DirectMap2M: 21164032 kB' 'DirectMap1G: 45088768 kB' 00:04:56.050 02:45:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:04:56.050 02:45:46 setup.sh.hugepages -- setup/common.sh@31-32 -- # [xtrace condensed: the get_meminfo loop repeats the same IFS=': ' / read -r var val _ / [[ $var == Hugepagesize ]] / continue sequence for every /proc/meminfo field up to HugePages_Rsvd]
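What the loop computes can be read back with a single awk over /proc/meminfo; the 2048 it echoes below is the default hugepage size in kB on this node (sketch, equivalent in effect only):

    awk '$1 == "Hugepagesize:" { print $2 }' /proc/meminfo    # prints 2048 on this system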
00:04:56.051 02:45:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:56.051 02:45:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:56.051 02:45:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.051 02:45:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:56.051 02:45:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:56.051 02:45:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:56.051 02:45:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.051 02:45:46 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:56.051 02:45:46 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:56.051 02:45:46 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:56.051 02:45:46 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:56.051 02:45:46 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:56.051 02:45:46 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:56.051 02:45:46 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:56.051 02:45:46 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:56.051 02:45:46 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:56.051 02:45:46 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:56.051 02:45:46 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:56.051 02:45:46 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:56.051 02:45:46 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:56.051 02:45:46 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:56.051 02:45:46 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:56.051 02:45:46 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:56.051 02:45:46 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:56.051 02:45:46 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:56.052 02:45:46 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:56.052 02:45:46 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:56.052 02:45:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:56.052 02:45:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:56.052 02:45:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:56.052 02:45:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:56.052 02:45:46 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:56.052 02:45:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:56.052 02:45:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:56.052 02:45:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:56.052 02:45:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:56.052 02:45:46 
setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:56.052 02:45:46 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:56.052 02:45:46 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:56.052 02:45:46 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:56.052 02:45:46 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:56.052 02:45:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:56.052 ************************************ 00:04:56.052 START TEST default_setup 00:04:56.052 ************************************ 00:04:56.052 02:45:46 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:04:56.052 02:45:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:56.052 02:45:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:56.052 02:45:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:56.052 02:45:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:56.052 02:45:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:56.052 02:45:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:56.052 02:45:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:56.052 02:45:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:56.052 02:45:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:56.052 02:45:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:56.052 02:45:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:56.052 02:45:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:56.052 02:45:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:56.052 02:45:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:56.052 02:45:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:56.052 02:45:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:56.052 02:45:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:56.052 02:45:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:56.052 02:45:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:56.052 02:45:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:56.052 02:45:46 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:56.052 02:45:46 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:57.026 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:57.026 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:57.026 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:57.026 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:57.026 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:57.026 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:57.026 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 
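default_setup above requests 2097152 kB of hugepages on node 0 only, i.e. 2097152 / 2048 = 1024 pages, which setup.sh then reserves while rebinding the devices. The sysfs equivalent of that reservation (sketch; node and page size taken from this run):

    echo 1024 | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    grep HugePages_Total /proc/meminfo    # should now report 1024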
00:04:57.026 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:57.026 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:57.026 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:57.026 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:57.026 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:57.026 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:57.026 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:57.026 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:57.026 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:57.966 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 39216916 kB' 'MemAvailable: 43867252 kB' 'Buffers: 3728 kB' 'Cached: 16777708 kB' 'SwapCached: 0 kB' 'Active: 12834184 kB' 'Inactive: 4455520 kB' 'Active(anon): 12268352 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4455520 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 511500 kB' 'Mapped: 185004 kB' 'Shmem: 11760084 kB' 'KReclaimable: 245216 kB' 'Slab: 640176 kB' 'SReclaimable: 245216 kB' 'SUnreclaim: 394960 kB' 'KernelStack: 13136 kB' 'PageTables: 9244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 13425400 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 197180 kB' 'VmallocChunk: 0 kB' 'Percpu: 41088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2872924 kB' 'DirectMap2M: 21164032 kB' 'DirectMap1G: 45088768 kB' 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.966 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.967 02:45:48 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.967 02:45:48 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.967 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 
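Each of the long runs of "continue" above is one pass of the same lookup: the whole of /proc/meminfo (or the per-node copy under /sys/devices/system/node when a node id is given) is read in, every "key: value" pair is compared against the requested key, and only the matching value is echoed, which is why AnonHugePages came back as anon=0. A condensed sketch of that lookup, with an assumed function name rather than the real setup/common.sh helper:

    # hypothetical condensed version of the get_meminfo helper traced above
    shopt -s extglob
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # per-node query: read the node's own meminfo file instead
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local line var val _
        while read -r line; do
            line=${line#Node +([0-9]) }          # per-node files prefix every line with "Node N "
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done < "$mem_f"
        echo 0
    }
    get_meminfo_sketch AnonHugePages     # -> 0 on this host, hence anon=0
    get_meminfo_sketch HugePages_Total   # -> 1024, matching the snapshot above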
00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 39218932 kB' 'MemAvailable: 43869268 kB' 'Buffers: 3728 kB' 'Cached: 16777708 kB' 'SwapCached: 0 kB' 'Active: 12837668 kB' 'Inactive: 4455520 kB' 'Active(anon): 12271836 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4455520 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515076 kB' 'Mapped: 185004 kB' 'Shmem: 11760084 kB' 'KReclaimable: 245216 kB' 'Slab: 640176 kB' 'SReclaimable: 245216 kB' 'SUnreclaim: 394960 kB' 'KernelStack: 12976 kB' 'PageTables: 8940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 13428464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197120 kB' 'VmallocChunk: 0 kB' 'Percpu: 41088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2872924 kB' 'DirectMap2M: 21164032 kB' 'DirectMap1G: 45088768 kB' 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.968 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.232 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.233 02:45:48 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 39219168 kB' 'MemAvailable: 43869504 kB' 'Buffers: 3728 kB' 'Cached: 16777708 kB' 'SwapCached: 0 kB' 'Active: 12831164 kB' 'Inactive: 4455520 kB' 'Active(anon): 12265332 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4455520 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 508084 kB' 'Mapped: 184924 kB' 'Shmem: 11760084 kB' 'KReclaimable: 245216 kB' 'Slab: 640248 kB' 'SReclaimable: 245216 kB' 'SUnreclaim: 395032 kB' 'KernelStack: 12944 kB' 'PageTables: 9120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 13422368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197132 kB' 'VmallocChunk: 0 kB' 'Percpu: 41088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2872924 kB' 'DirectMap2M: 21164032 kB' 'DirectMap1G: 45088768 kB' 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.233 02:45:48 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.233 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.234 02:45:48 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.234 02:45:48 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.234 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.235 02:45:48 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:58.235 nr_hugepages=1024 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:58.235 resv_hugepages=0 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:58.235 surplus_hugepages=0 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:58.235 anon_hugepages=0 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 39218412 kB' 'MemAvailable: 43868748 kB' 'Buffers: 3728 kB' 'Cached: 16777756 kB' 'SwapCached: 0 kB' 'Active: 12831140 
kB' 'Inactive: 4455520 kB' 'Active(anon): 12265308 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4455520 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 508544 kB' 'Mapped: 184508 kB' 'Shmem: 11760132 kB' 'KReclaimable: 245216 kB' 'Slab: 640224 kB' 'SReclaimable: 245216 kB' 'SUnreclaim: 395008 kB' 'KernelStack: 12896 kB' 'PageTables: 8948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 13422392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197116 kB' 'VmallocChunk: 0 kB' 'Percpu: 41088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2872924 kB' 'DirectMap2M: 21164032 kB' 'DirectMap1G: 45088768 kB' 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.235 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.236 02:45:48 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
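[editor's note] The trace through this pass is setup/common.sh's get_meminfo helper walking /proc/meminfo one "Key: value" pair at a time and continuing until it reaches the requested field (HugePages_Total here, HugePages_Rsvd in the pass before it). A minimal standalone sketch of that pattern follows; the function name and the simplified "Node <id>" prefix handling are illustrative, not the exact code from the repository.

    # Sketch of the scan pattern seen in the trace: split each meminfo line on
    # ': ' and stop at the requested key. Per-node statistics live in sysfs when
    # a NUMA node id is supplied; those lines carry a "Node <id> " prefix.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local line var val _
        while read -r line; do
            line=${line#Node "$node" }          # strip "Node 0 " etc. from per-node files
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"                     # value only; the kB unit is discarded
                return 0
            fi
        done < "$mem_f"
        return 1
    }
    # e.g. get_meminfo_sketch HugePages_Rsvd   -> 0 on the system traced above
    #      get_meminfo_sketch HugePages_Surp 0 -> surplus pages reported for node0

The field-by-field continue/IFS pattern is why the log repeats one entry per meminfo key: xtrace prints every comparison until the match is found.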
00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.236 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.237 02:45:48 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:58.237 
02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 19022060 kB' 'MemUsed: 13807824 kB' 'SwapCached: 0 kB' 'Active: 7374844 kB' 'Inactive: 3341192 kB' 'Active(anon): 7222296 kB' 'Inactive(anon): 0 kB' 'Active(file): 152548 kB' 'Inactive(file): 3341192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10578192 kB' 'Mapped: 46780 kB' 'AnonPages: 141024 kB' 'Shmem: 7084452 kB' 'KernelStack: 6472 kB' 'PageTables: 3752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 129364 kB' 'Slab: 353292 kB' 'SReclaimable: 129364 kB' 'SUnreclaim: 223928 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:58.237 02:45:48 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.237 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.238 02:45:48 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:58.238 node0=1024 expecting 1024 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:58.238 00:04:58.238 real 0m2.330s 00:04:58.238 user 0m0.607s 00:04:58.238 sys 0m0.728s 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:58.238 02:45:48 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:58.238 ************************************ 00:04:58.238 END TEST default_setup 00:04:58.238 ************************************ 00:04:58.238 02:45:48 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:58.238 02:45:48 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:58.238 02:45:48 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:58.238 02:45:48 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:58.238 ************************************ 00:04:58.238 START TEST per_node_1G_alloc 00:04:58.238 ************************************ 00:04:58.239 02:45:48 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:04:58.239 02:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:58.239 02:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:58.239 02:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:58.239 02:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:58.239 02:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:58.239 02:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:58.239 02:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 
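[editor's note] The per_node_1G_alloc test that starts here requests 1 GiB worth of default-size hugepages on each of nodes 0 and 1 (get_test_nr_hugepages 1048576 0 1, i.e. 1048576 kB / 2048 kB = 512 pages per node, 1024 in total). A hedged sketch of that arithmetic, plus reading back the per-node counts through the standard kernel sysfs interface, is below; the variable names are illustrative and this only inspects the current values, it does not reproduce what scripts/setup.sh itself does.

    #!/usr/bin/env bash
    set -e
    size_kb=1048576                                                  # requested per node: 1 GiB
    default_hp_kb=$(awk '/Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 on this system
    nr_per_node=$(( size_kb / default_hp_kb ))                       # 512 pages per node
    for node in 0 1; do
        sysfs=/sys/devices/system/node/node$node/hugepages/hugepages-${default_hp_kb}kB/nr_hugepages
        cur=$(cat "$sysfs" 2>/dev/null || echo '?')
        echo "node$node target: $nr_per_node pages (currently configured: $cur)"
    done

In the trace this is expressed as NRHUGE=512 with HUGENODE=0,1 before setup.sh is invoked.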
00:04:58.239 02:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:58.239 02:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:58.239 02:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:58.239 02:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:58.239 02:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:58.239 02:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:58.239 02:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:58.239 02:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:58.239 02:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:58.239 02:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:58.239 02:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:58.239 02:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:58.239 02:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:58.239 02:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:58.239 02:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:58.239 02:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:58.239 02:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:58.239 02:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:58.239 02:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:58.239 02:45:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:59.174 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:59.174 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:59.174 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:59.174 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:59.174 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:59.174 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:59.174 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:59.174 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:59.174 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:59.174 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:59.174 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:59.174 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:59.174 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:59.174 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:59.174 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:59.174 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:59.174 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:59.440 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:59.440 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:59.440 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:59.440 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:59.440 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:59.440 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:59.440 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:59.440 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:59.440 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:59.440 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:59.440 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:59.440 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:59.440 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:59.440 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:59.440 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.440 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.440 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.440 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.440 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.440 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.440 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.440 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 39198964 kB' 'MemAvailable: 43849300 kB' 'Buffers: 3728 kB' 'Cached: 16777824 kB' 'SwapCached: 0 kB' 'Active: 12832828 kB' 'Inactive: 4455520 kB' 'Active(anon): 12266996 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4455520 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 510492 kB' 'Mapped: 184572 kB' 'Shmem: 11760200 kB' 'KReclaimable: 245216 kB' 'Slab: 640160 kB' 'SReclaimable: 245216 kB' 'SUnreclaim: 394944 kB' 'KernelStack: 12928 kB' 'PageTables: 9044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 13422612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197164 kB' 'VmallocChunk: 0 kB' 'Percpu: 41088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2872924 kB' 'DirectMap2M: 21164032 kB' 'DirectMap1G: 45088768 kB' 
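[editor's note] verify_nr_hugepages, traced from here on, cross-checks the global hugepage counters against the per-node ones: system-wide HugePages_Total must equal nr_hugepages plus surplus plus reserved, and the per-node totals read from /sys/devices/system/node/node*/meminfo must add up to the same number. A rough standalone analogue of that consistency check, using only standard procfs/sysfs files (helper names below are illustrative):

    #!/usr/bin/env bash
    meminfo_field() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

    total=$(meminfo_field HugePages_Total)
    free=$(meminfo_field HugePages_Free)
    rsvd=$(meminfo_field HugePages_Rsvd)
    surp=$(meminfo_field HugePages_Surp)

    node_sum=0
    for f in /sys/devices/system/node/node*/meminfo; do
        n=$(awk '/HugePages_Total:/ {print $NF}' "$f")   # per-node total, no kB suffix
        node_sum=$(( node_sum + n ))
        echo "$(basename "$(dirname "$f")")=$n"
    done

    echo "total=$total free=$free rsvd=$rsvd surp=$surp node_sum=$node_sum"
    [[ $node_sum -eq $total ]] || echo "per-node counts do not add up to HugePages_Total" >&2

The long AnonHugePages scan that follows is the same get_meminfo pattern again, triggered because transparent hugepages are in madvise mode rather than never on this host.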
00:04:59.440 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.440 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.440 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.440 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.440 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.440 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.440 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.440 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.440 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.440 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.440 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.440 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.440 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.441 
02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.441 02:45:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.441 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 
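The block above is one full pass of the get_meminfo helper (setup/common.sh@17-33): it dumps the meminfo snapshot, walks it key by key with IFS=': ', and echoes the value of the requested field (0 for AnonHugePages here). A condensed sketch of that pattern, assuming the global /proc/meminfo path only; the real helper also supports the per-node file /sys/devices/system/node/node$node/meminfo and strips its "Node N " prefix (the mapfile and "${mem[@]#Node +([0-9]) }" lines in the trace), which is omitted here for brevity.

#!/usr/bin/env bash
# Sketch of the get_meminfo lookup pattern traced above; simplified, not the
# exact setup/common.sh implementation.
get_meminfo() {
  local get=$1
  local var val _
  while IFS=': ' read -r var val _; do
    # "HugePages_Surp: 0" splits into var=HugePages_Surp val=0; the matched
    # value is echoed, which is where the "echo 0" entries in the trace
    # come from.
    [[ $var == "$get" ]] && { echo "$val"; return 0; }
  done < /proc/meminfo
  return 1
}

get_meminfo HugePages_Total   # prints 1024 on the system dumped above
get_meminfo HugePages_Surp    # prints 0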
00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 39199636 kB' 'MemAvailable: 43849972 kB' 'Buffers: 3728 kB' 'Cached: 16777824 kB' 'SwapCached: 0 kB' 'Active: 12833712 kB' 'Inactive: 4455520 kB' 'Active(anon): 12267880 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4455520 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 511564 kB' 'Mapped: 184580 kB' 'Shmem: 11760200 kB' 'KReclaimable: 245216 kB' 'Slab: 640148 kB' 'SReclaimable: 245216 kB' 'SUnreclaim: 394932 kB' 'KernelStack: 12896 kB' 'PageTables: 8936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 13422628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197132 kB' 'VmallocChunk: 0 kB' 'Percpu: 41088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2872924 kB' 'DirectMap2M: 21164032 kB' 'DirectMap1G: 45088768 kB' 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.442 02:45:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.442 02:45:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.442 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.443 02:45:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.443 02:45:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.443 02:45:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.443 02:45:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.443 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:59.444 02:45:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 39199852 kB' 'MemAvailable: 43850188 kB' 'Buffers: 3728 kB' 'Cached: 16777828 kB' 'SwapCached: 0 kB' 'Active: 12833720 kB' 'Inactive: 4455520 kB' 'Active(anon): 12267888 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4455520 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 511128 kB' 'Mapped: 184580 kB' 'Shmem: 11760204 kB' 'KReclaimable: 245216 kB' 'Slab: 640184 kB' 'SReclaimable: 245216 kB' 'SUnreclaim: 394968 kB' 'KernelStack: 12880 kB' 'PageTables: 8912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 13422652 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197132 kB' 'VmallocChunk: 0 kB' 'Percpu: 41088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2872924 kB' 'DirectMap2M: 21164032 kB' 'DirectMap1G: 45088768 kB' 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.444 02:45:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.444 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.445 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:59.446 nr_hugepages=1024 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:59.446 resv_hugepages=0 00:04:59.446 02:45:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:59.446 surplus_hugepages=0 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:59.446 anon_hugepages=0 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 39199852 kB' 'MemAvailable: 43850188 kB' 'Buffers: 3728 kB' 'Cached: 16777868 kB' 'SwapCached: 0 kB' 'Active: 12833704 kB' 'Inactive: 4455520 kB' 'Active(anon): 12267872 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4455520 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 511484 kB' 'Mapped: 184520 kB' 'Shmem: 11760244 kB' 'KReclaimable: 245216 kB' 'Slab: 640176 kB' 'SReclaimable: 245216 kB' 'SUnreclaim: 394960 kB' 'KernelStack: 12896 kB' 'PageTables: 8960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 13422672 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197148 kB' 'VmallocChunk: 0 kB' 'Percpu: 41088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2872924 kB' 'DirectMap2M: 21164032 kB' 'DirectMap1G: 45088768 kB' 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.446 02:45:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.446 02:45:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.446 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.447 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:59.448 02:45:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 20067600 kB' 'MemUsed: 12762284 kB' 'SwapCached: 0 kB' 'Active: 7375272 kB' 'Inactive: 3341192 kB' 'Active(anon): 7222724 kB' 'Inactive(anon): 0 kB' 'Active(file): 152548 kB' 'Inactive(file): 3341192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10578284 kB' 'Mapped: 46780 kB' 'AnonPages: 141636 kB' 'Shmem: 7084544 kB' 'KernelStack: 6456 kB' 'PageTables: 3652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 129364 kB' 'Slab: 353356 kB' 'SReclaimable: 129364 kB' 'SUnreclaim: 223992 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.448 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.449 02:45:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.449 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711848 kB' 'MemFree: 19132252 kB' 'MemUsed: 8579596 kB' 'SwapCached: 0 kB' 'Active: 5458100 kB' 'Inactive: 1114328 kB' 'Active(anon): 5044816 kB' 'Inactive(anon): 0 kB' 'Active(file): 413284 kB' 'Inactive(file): 1114328 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6203316 kB' 'Mapped: 137740 kB' 'AnonPages: 369512 kB' 'Shmem: 4675704 kB' 'KernelStack: 6424 kB' 'PageTables: 5252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115852 kB' 'Slab: 286820 kB' 'SReclaimable: 115852 kB' 'SUnreclaim: 170968 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
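
For readers skimming the trace: the printf entry above is get_meminfo() in setup/common.sh taking a snapshot of /sys/devices/system/node/node1/meminfo, and the long runs of "[[ ... ]] / continue / IFS=': ' / read -r var val _" entries around it are the same helper walking that snapshot field by field until it reaches the requested key (HugePages_Surp here; HugePages_Total and HugePages_Rsvd earlier in the trace). A minimal sketch of that scan pattern, reconstructed from the visible xtrace -- the name, arguments and paths follow the trace, but this is not the verbatim SPDK helper:

    get_meminfo() {
        # get_meminfo FIELD [NODE] prints FIELD's value from /proc/meminfo,
        # or from the node-local meminfo file when NODE is given.
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        # Node-local files prefix every line with "Node N "; strip that first,
        # then split on ': ' exactly like the trace's `read -r var val _`.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every other field
            echo "$val"                        # numeric value only; the "kB" unit lands in _
            return 0
        done < <(sed 's/^Node [0-9]* *//' "$mem_f")
        return 1
    }

    # e.g. get_meminfo HugePages_Total    -> 1024 on this box (see the /proc/meminfo dump above)
    #      get_meminfo HugePages_Surp 1   -> 0   (see the node1 dump just above)
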
00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
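
The node1 scan below resolves HugePages_Surp to 0, after which the test prints its verdict: with 1024 two-megabyte pages requested and neither surplus nor reserved pages in play, each of the two NUMA nodes is expected to hold 512 ("node0=512 expecting 512", "node1=512 expecting 512"). The same split can be eyeballed outside the harness by reading the node-local meminfo files the trace parses; the second command uses the standard kernel hugepages sysfs layout rather than anything this script touches, and the 2048 kB directory name matches the Hugepagesize reported earlier in the trace. Values will of course differ on other machines.

    # Per-node hugepage counters, straight from the files the trace reads;
    # each match is prefixed with its file path, so the 512/512 split is visible at a glance.
    grep -HE 'HugePages_(Total|Free|Surp)' /sys/devices/system/node/node*/meminfo

    # The same per-node totals via the dedicated sysfs knobs (standard kernel layout):
    cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
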
00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.450 02:45:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.450 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.451 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.451 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.451 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.451 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.451 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.451 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.451 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.451 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.451 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.451 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.451 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.451 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.451 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.451 02:45:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.451 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.451 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.451 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.451 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.451 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.451 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.451 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.451 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.451 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:59.451 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.451 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.451 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.451 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:59.451 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:59.451 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:59.451 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:59.451 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:59.451 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:59.451 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:59.451 node0=512 expecting 512 00:04:59.451 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:59.451 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:59.451 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:59.451 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:59.451 node1=512 expecting 512 00:04:59.451 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:59.451 00:04:59.451 real 0m1.305s 00:04:59.451 user 0m0.525s 00:04:59.451 sys 0m0.738s 00:04:59.451 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:59.451 02:45:50 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:59.451 ************************************ 00:04:59.451 END TEST per_node_1G_alloc 00:04:59.451 ************************************ 00:04:59.709 02:45:50 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:59.709 02:45:50 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:59.709 02:45:50 
setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:59.709 02:45:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:59.709 ************************************ 00:04:59.709 START TEST even_2G_alloc 00:04:59.709 ************************************ 00:04:59.709 02:45:50 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:04:59.709 02:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:59.709 02:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:59.709 02:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:59.709 02:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:59.709 02:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:59.709 02:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:59.709 02:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:59.709 02:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:59.709 02:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:59.709 02:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:59.709 02:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:59.709 02:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:59.709 02:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:59.709 02:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:59.709 02:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:59.709 02:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:59.709 02:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:59.709 02:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:59.709 02:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:59.709 02:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:59.709 02:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:59.709 02:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:59.709 02:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:59.709 02:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:59.709 02:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:59.709 02:45:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:59.709 02:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:59.709 02:45:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:00.647 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:00.647 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:00.647 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 
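The even_2G_alloc setup traced above requests 2097152, which at the 2048 kB default hugepage size reported in the meminfo dumps that follow works out to nr_hugepages=1024; get_test_nr_hugepages_per_node then splits that evenly across the two nodes (_no_nodes=2), assigning 512 pages to each. The test sets NRHUGE=1024 and HUGE_EVEN_ALLOC=yes and runs scripts/setup.sh, whose vfio-pci rebinding output continues directly below. A rough sketch of the arithmetic, assuming kB units throughout (hypothetical variable names, not the hugepages.sh internals):

    #!/usr/bin/env bash
    # Sketch of the sizing shown in the trace above; illustrative only.
    size_kb=2097152          # 2 GiB requested by even_2G_alloc
    hugepage_kb=2048         # default hugepage size (Hugepagesize: 2048 kB)
    no_nodes=2               # matches _no_nodes=2 in the trace

    nr_hugepages=$(( size_kb / hugepage_kb ))            # 1024
    declare -a nodes_test
    for (( node = no_nodes - 1; node >= 0; node-- )); do
        nodes_test[node]=$(( nr_hugepages / no_nodes ))  # 512 per node
    done

    echo "nr_hugepages=$nr_hugepages per-node: ${nodes_test[*]}"
    # The real test then hands this off to the allocator, roughly:
    #   NRHUGE=$nr_hugepages HUGE_EVEN_ALLOC=yes ./scripts/setup.sh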
00:05:00.647 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:00.647 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:00.647 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:00.647 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:00.647 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:00.647 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:00.647 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:00.647 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:00.647 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:00.647 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:00.647 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:00.647 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:00.647 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:00.647 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 39190772 kB' 'MemAvailable: 43841108 kB' 'Buffers: 3728 kB' 'Cached: 16777964 kB' 'SwapCached: 0 kB' 'Active: 12831024 kB' 'Inactive: 4455520 kB' 'Active(anon): 12265192 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4455520 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 
0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 508012 kB' 'Mapped: 184556 kB' 'Shmem: 11760340 kB' 'KReclaimable: 245216 kB' 'Slab: 639956 kB' 'SReclaimable: 245216 kB' 'SUnreclaim: 394740 kB' 'KernelStack: 12864 kB' 'PageTables: 8864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 13422884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197228 kB' 'VmallocChunk: 0 kB' 'Percpu: 41088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2872924 kB' 'DirectMap2M: 21164032 kB' 'DirectMap1G: 45088768 kB' 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.647 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.648 02:45:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.648 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:00.913 02:45:51 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 39191920 kB' 'MemAvailable: 43842256 kB' 'Buffers: 3728 kB' 'Cached: 16777968 kB' 'SwapCached: 0 kB' 'Active: 12831360 kB' 'Inactive: 4455520 kB' 'Active(anon): 12265528 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4455520 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 508368 kB' 'Mapped: 184556 kB' 'Shmem: 11760344 kB' 'KReclaimable: 245216 kB' 'Slab: 639940 kB' 'SReclaimable: 245216 kB' 'SUnreclaim: 394724 kB' 'KernelStack: 12880 kB' 'PageTables: 8808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 13422532 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197212 kB' 'VmallocChunk: 0 kB' 'Percpu: 41088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2872924 kB' 'DirectMap2M: 21164032 kB' 'DirectMap1G: 45088768 kB' 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.913 
02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.913 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.914 02:45:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.914 
02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.914 
02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.914 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.915 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.915 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.915 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.915 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.915 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.915 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.915 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.915 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.915 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.915 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.915 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:00.915 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:00.915 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:00.915 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:00.915 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:00.915 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:00.915 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:00.915 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:00.915 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.915 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.915 02:45:51 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.915 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.915 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.915 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.915 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.915 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 39191668 kB' 'MemAvailable: 43842004 kB' 'Buffers: 3728 kB' 'Cached: 16777988 kB' 'SwapCached: 0 kB' 'Active: 12830800 kB' 'Inactive: 4455520 kB' 'Active(anon): 12264968 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4455520 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 507820 kB' 'Mapped: 184532 kB' 'Shmem: 11760364 kB' 'KReclaimable: 245216 kB' 'Slab: 639952 kB' 'SReclaimable: 245216 kB' 'SUnreclaim: 394736 kB' 'KernelStack: 12912 kB' 'PageTables: 8904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 13422552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197180 kB' 'VmallocChunk: 0 kB' 'Percpu: 41088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2872924 kB' 'DirectMap2M: 21164032 kB' 'DirectMap1G: 45088768 kB' 00:05:00.915 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.915 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.915 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.915 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.915 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.915 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.915 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.915 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.915 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.915 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.915 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.915 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.915 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.915 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.915 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.915 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.915 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:00.915 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[ the same IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue trace repeats for every /proc/meminfo key from SwapCached through CmaFree ]
00:05:00.916 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:00.916 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:00.916 02:45:51 setup.sh.hugepages.even_2G_alloc --
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.916 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.916 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.916 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.916 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.916 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.916 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.916 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.916 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.916 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.916 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.916 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.916 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.916 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:00.916 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:00.916 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:00.916 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:00.916 nr_hugepages=1024 00:05:00.916 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:00.916 resv_hugepages=0 00:05:00.916 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:00.916 surplus_hugepages=0 00:05:00.916 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:00.916 anon_hugepages=0 00:05:00.916 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:00.916 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:00.916 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:00.917 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:00.917 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:00.917 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:00.917 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:00.917 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.917 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.917 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.917 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.917 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.917 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.917 
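What the trace above shows is setup/common.sh's get_meminfo helper walking /proc/meminfo key by key: HugePages_Surp and HugePages_Rsvd both come back 0, so surp=0 and resv=0 against the 1024 reserved pages, and the same walk now repeats below for HugePages_Total. As a rough sketch reconstructed from the xtrace only (not copied from the SPDK tree; the function name is illustrative), the pattern being traced is approximately:

#!/usr/bin/env bash
# Illustrative sketch of a get_meminfo-style lookup; names are not the real helper's.
shopt -s extglob
get_meminfo_sketch() {
    local get=$1 node=$2 var val _ line
    local mem_f=/proc/meminfo mem
    # A per-node query reads that node's meminfo instead of the global file.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Node files prefix each line with "Node <id> "; strip it so every line is "Key: value".
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        # Skip every key until the requested one, then print its value.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}
# e.g. resv=$(get_meminfo_sketch HugePages_Rsvd); surp=$(get_meminfo_sketch HugePages_Surp 0)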
02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.917 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 39192784 kB' 'MemAvailable: 43843120 kB' 'Buffers: 3728 kB' 'Cached: 16778008 kB' 'SwapCached: 0 kB' 'Active: 12830804 kB' 'Inactive: 4455520 kB' 'Active(anon): 12264972 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4455520 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 507820 kB' 'Mapped: 184532 kB' 'Shmem: 11760384 kB' 'KReclaimable: 245216 kB' 'Slab: 639952 kB' 'SReclaimable: 245216 kB' 'SUnreclaim: 394736 kB' 'KernelStack: 12912 kB' 'PageTables: 8904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 13422576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197180 kB' 'VmallocChunk: 0 kB' 'Percpu: 41088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2872924 kB' 'DirectMap2M: 21164032 kB' 'DirectMap1G: 45088768 kB' 00:05:00.917 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.917 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.917 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.917 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.917 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.917 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.917 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.917 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.917 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.917 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.917 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.917 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.917 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.917 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.917 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.917 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.917 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.917 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.917 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.917 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.917 02:45:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:00.917 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[ the same IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue trace repeats for every /proc/meminfo key from Active through Unaccepted ]
00:05:00.918 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:00.918 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.918 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.918 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:00.918 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:00.918 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:00.918 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:00.918 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:00.918 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:00.918 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:00.918 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:00.918 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:00.919 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:00.919 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:00.919 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:00.919 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:00.919 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:00.919 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:00.919 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:05:00.919 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:00.919 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:00.919 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.919 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:00.919 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:00.919 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.919 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.919 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.919 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.919 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 20071276 kB' 'MemUsed: 12758608 kB' 'SwapCached: 0 kB' 'Active: 7373712 kB' 'Inactive: 3341192 kB' 'Active(anon): 7221164 kB' 'Inactive(anon): 0 kB' 'Active(file): 152548 kB' 'Inactive(file): 3341192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10578360 kB' 'Mapped: 46780 kB' 'AnonPages: 139676 kB' 'Shmem: 7084620 kB' 'KernelStack: 6504 kB' 'PageTables: 3744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'KReclaimable: 129364 kB' 'Slab: 353188 kB' 'SReclaimable: 129364 kB' 'SUnreclaim: 223824 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:00.919 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.919 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.919 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.919 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.919 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.919 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.919 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.919 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.919 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.919 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.919 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.919 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.919 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.919 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.919 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.919 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.919 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.919 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.919 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.919 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.919 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.919 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.919 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.919 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.919 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.919 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.919 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.919 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.919 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.919 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.919 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.919 02:45:51 
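With 1024 pages confirmed system-wide and get_nodes reporting two NUMA nodes (nodes_sys[0]=nodes_sys[1]=512), the even_2G_alloc case expects an even 512/512 split, which the per-node snapshots bear out (node0 above shows HugePages_Total: 512, HugePages_Free: 512; node1 follows). A hedged sketch of that expectation, assuming the usual sysfs hugepage layout; the helper name and output are illustrative:

#!/usr/bin/env bash
# Illustrative check that nr_hugepages is split evenly across NUMA nodes.
shopt -s extglob nullglob
check_even_split_sketch() {
    local nr_hugepages=$1 node total
    local nodes=(/sys/devices/system/node/node+([0-9]))
    (( ${#nodes[@]} > 0 )) || return 1
    local expected=$((nr_hugepages / ${#nodes[@]}))
    for node in "${nodes[@]}"; do
        # Per-node 2 MB hugepage count exposed by sysfs.
        total=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
        printf '%s: %s hugepages (expected %s)\n' "${node##*/}" "$total" "$expected"
        (( total == expected )) || return 1
    done
}
# e.g. check_even_split_sketch 1024   # expects 512 per node on this 2-node machine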
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[ the same IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue trace repeats for every node0 meminfo key from Active(file) through Unaccepted ]
00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
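The loop at hugepages.sh@115-117 repeats this HugePages_Surp lookup for each node, adding any reserved and surplus pages to that node's expected count in nodes_test[] (here both additions are 0, leaving 512 per node); node0 finishes just below and node1 follows. A small self-contained sketch of that per-node accounting, reading the per-node meminfo directly; array contents and variable names are illustrative:

#!/usr/bin/env bash
# Illustrative per-node accounting: start each node at its expected share and
# add the reserved/surplus pages it reports.
nodes_test=(512 512)   # expected per-node share on this 2-node box
resv=0
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    # Per-node meminfo lines look like "Node <id> HugePages_Surp: <n>".
    surp=$(awk '$3 == "HugePages_Surp:" {print $4}' \
        "/sys/devices/system/node/node$node/meminfo")
    (( nodes_test[node] += ${surp:-0} ))
    echo "node$node: expecting ${nodes_test[node]} hugepages"
done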
00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711848 kB' 'MemFree: 19120752 kB' 'MemUsed: 8591096 kB' 'SwapCached: 0 kB' 'Active: 5456836 kB' 'Inactive: 1114328 kB' 'Active(anon): 5043552 kB' 'Inactive(anon): 0 kB' 'Active(file): 413284 kB' 'Inactive(file): 1114328 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6203420 kB' 'Mapped: 137752 kB' 'AnonPages: 367800 kB' 'Shmem: 4675808 kB' 'KernelStack: 6376 kB' 'PageTables: 5048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115852 kB' 'Slab: 286764 kB' 'SReclaimable: 115852 kB' 
'SUnreclaim: 170912 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.920 02:45:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.920 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.921 02:45:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
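The scan just above repeats the same walk for node 1; the entries that follow show it reaching HugePages_Surp, echoing 0, and feeding the per-node bookkeeping at hugepages.sh@115-@128. Condensed, and reusing the hypothetical get_meminfo_value helper sketched earlier, that bookkeeping amounts to roughly:

    # Sketch only; variable names mirror the trace but this is not the
    # hugepages.sh original. Expected counts come from the even 2G split.
    nodes_test=(512 512)     # node0/node1 expectation for even_2G_alloc
    resv=0                   # reserved-page adjustment; 0 in this run
    for node in "${!nodes_test[@]}"; do
        surp=$(get_meminfo_value HugePages_Surp "$node")   # 0 here, per the trace
        (( nodes_test[node] += resv + surp ))
        echo "node$node=${nodes_test[node]} expecting 512" # the lines echoed below
    done

With zero surplus and zero reserved pages on both nodes, the final comparison at hugepages.sh@130 ([[ 512 == 512 ]]) passes and even_2G_alloc finishes successfully, as the next entries show.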
00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:00.921 node0=512 expecting 512 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:00.921 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:00.922 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:05:00.922 node1=512 expecting 512 00:05:00.922 02:45:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:00.922 00:05:00.922 real 0m1.306s 00:05:00.922 user 0m0.553s 00:05:00.922 sys 0m0.715s 00:05:00.922 02:45:51 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:00.922 02:45:51 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:00.922 ************************************ 00:05:00.922 END TEST even_2G_alloc 00:05:00.922 ************************************ 00:05:00.922 02:45:51 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:00.922 02:45:51 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:00.922 02:45:51 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:00.922 02:45:51 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:00.922 ************************************ 00:05:00.922 START TEST odd_alloc 00:05:00.922 ************************************ 00:05:00.922 02:45:51 
setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:05:00.922 02:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:00.922 02:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:05:00.922 02:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:00.922 02:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:00.922 02:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:00.922 02:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:00.922 02:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:00.922 02:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:00.922 02:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:00.922 02:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:00.922 02:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:00.922 02:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:00.922 02:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:00.922 02:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:00.922 02:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:00.922 02:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:00.922 02:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:05:00.922 02:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:00.922 02:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:00.922 02:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:05:00.922 02:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:00.922 02:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:00.922 02:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:00.922 02:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:00.922 02:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:00.922 02:45:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:05:00.922 02:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:00.922 02:45:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:01.857 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:01.857 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:01.857 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:02.120 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:02.120 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:02.120 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:02.120 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:02.120 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:02.120 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 
00:05:02.120 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:02.120 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:02.120 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:02.120 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:02.120 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:02.120 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:02.120 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:02.120 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 39197972 kB' 'MemAvailable: 43848308 kB' 'Buffers: 3728 kB' 'Cached: 16778100 kB' 'SwapCached: 0 kB' 'Active: 12823928 kB' 'Inactive: 4455520 kB' 'Active(anon): 12258096 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4455520 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 500768 kB' 'Mapped: 183628 kB' 'Shmem: 11760476 kB' 'KReclaimable: 245216 kB' 'Slab: 639760 kB' 'SReclaimable: 245216 kB' 'SUnreclaim: 394544 kB' 'KernelStack: 12752 kB' 'PageTables: 8152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 13395660 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197148 kB' 'VmallocChunk: 0 kB' 'Percpu: 41088 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2872924 kB' 'DirectMap2M: 21164032 kB' 'DirectMap1G: 45088768 kB' 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.120 02:45:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.120 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.121 
02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.121 02:45:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.121 
02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
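Before the odd_alloc verification reads the hugepage counters, verify_nr_hugepages gates the anonymous-hugepage figure on the system THP setting: the check at hugepages.sh@96 above matches the current "always [madvise] never" value against *[never]* and, since THP is not forced off here, goes on to read AnonHugePages (0 kB, so anon=0). A rough standalone equivalent, again using the hypothetical get_meminfo_value helper and assuming the disabled path simply leaves anon at 0 (the trace only exercises the enabled path):

    # Sketch of the gate at hugepages.sh@96-@97 seen above.
    anon=0
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" here
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo_value AnonHugePages)               # 0 kB on this box -> anon=0
    fi
    echo "anon=$anon"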
00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 39197972 kB' 'MemAvailable: 43848308 kB' 'Buffers: 3728 kB' 'Cached: 16778120 kB' 'SwapCached: 0 kB' 'Active: 12824368 kB' 'Inactive: 4455520 kB' 'Active(anon): 12258536 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4455520 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 501304 kB' 'Mapped: 183688 kB' 'Shmem: 11760496 kB' 'KReclaimable: 245216 kB' 'Slab: 639760 kB' 'SReclaimable: 245216 kB' 'SUnreclaim: 394544 kB' 'KernelStack: 12784 kB' 'PageTables: 8244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 13395676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197132 kB' 'VmallocChunk: 0 kB' 'Percpu: 41088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2872924 kB' 'DirectMap2M: 21164032 kB' 'DirectMap1G: 45088768 kB' 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.121 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
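The snapshot above confirms the odd_alloc request took effect system-wide: HugePages_Total and HugePages_Free are both 1025 and Hugetlb is 2099200 kB. Those figures are consistent with the sizing traced earlier (get_test_nr_hugepages 2098176 with HUGEMEM=2049 and HUGE_EVEN_ALLOC=yes, split 513/512 across the two nodes). A quick arithmetic check, assuming ceiling division for the page count (the exact expression used by hugepages.sh may differ; this is only the arithmetic):

    # Worked check of the numbers reported in the snapshot.
    size_kb=2098176                                        # HUGEMEM=2049 MiB
    page_kb=2048                                           # Hugepagesize above
    pages=$(( (size_kb + page_kb - 1) / page_kb ))         # 1025, as requested
    echo "node0=$(( pages / 2 + pages % 2 )) node1=$(( pages / 2 ))"   # 513 / 512
    echo "Hugetlb=$(( pages * page_kb )) kB"               # 2099200 kB, as reported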
00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.122 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.123 02:45:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 39198704 kB' 'MemAvailable: 43849040 kB' 'Buffers: 3728 kB' 'Cached: 16778136 kB' 'SwapCached: 0 kB' 'Active: 12824268 kB' 'Inactive: 4455520 kB' 'Active(anon): 12258436 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4455520 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 501124 kB' 'Mapped: 183612 kB' 'Shmem: 11760512 kB' 'KReclaimable: 245216 kB' 'Slab: 639752 kB' 'SReclaimable: 245216 kB' 'SUnreclaim: 394536 kB' 'KernelStack: 12800 kB' 'PageTables: 8240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 13395696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197132 kB' 'VmallocChunk: 0 kB' 'Percpu: 41088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2872924 kB' 'DirectMap2M: 21164032 kB' 'DirectMap1G: 45088768 kB' 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.123 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.124 02:45:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.124 
02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
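The long run of "continue" lines in this part of the trace is setup/common.sh's get_meminfo helper scanning every key of /proc/meminfo (or a per-node meminfo file) until it reaches the requested one - here HugePages_Surp and then HugePages_Rsvd - so each non-matching key appears as one [[ ... ]] test plus one continue in the xtrace. A minimal sketch of that loop, reconstructed only from these trace lines (the real helper in test/setup/common.sh may differ in detail):

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern seen in the trace

# Sketch of get_meminfo as traced above: print the value of one meminfo key,
# optionally from a specific NUMA node's meminfo file.
get_meminfo() {
	local get=$1 node=$2
	local var val _
	local mem_f=/proc/meminfo
	# With a node argument, prefer that node's own meminfo file (this is what
	# the per-node checks further down in this log do).
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	local -a mem
	mapfile -t mem < "$mem_f"
	# Per-node files prefix every line with "Node <n> "; strip that prefix.
	mem=("${mem[@]#Node +([0-9]) }")
	# Every key that is not the requested one produces one "continue" line in
	# the xtrace; the matching key prints its value and returns.
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] && { echo "$val"; return 0; }
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

# Example calls matching this log:
#   get_meminfo HugePages_Rsvd      -> prints 0 (the "echo 0 / return 0" pair above)
#   get_meminfo HugePages_Surp 0    -> reads node0's meminfo, prints 0

With this shape, the "echo 0" followed by "return 0" visible at the end of each scan is simply the helper reporting that the system currently has no surplus and no reserved hugepages.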
00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.125 
02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:02.125 nr_hugepages=1025 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:02.125 resv_hugepages=0 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:02.125 surplus_hugepages=0 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:02.125 anon_hugepages=0 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.125 02:45:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 39198704 kB' 'MemAvailable: 43849040 kB' 'Buffers: 3728 kB' 'Cached: 16778176 kB' 'SwapCached: 0 kB' 'Active: 12823944 kB' 'Inactive: 4455520 kB' 'Active(anon): 12258112 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4455520 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 500720 kB' 'Mapped: 183612 kB' 'Shmem: 11760552 kB' 'KReclaimable: 245216 kB' 'Slab: 639752 kB' 'SReclaimable: 245216 kB' 'SUnreclaim: 394536 kB' 'KernelStack: 12784 kB' 'PageTables: 8184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 13395716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197148 kB' 'VmallocChunk: 0 kB' 'Percpu: 41088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2872924 kB' 'DirectMap2M: 21164032 kB' 'DirectMap1G: 45088768 kB' 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.126 
02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.126 02:45:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.126 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.127 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.127 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.127 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.127 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:05:02.127 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.127 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.127 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.127 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.127 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.127 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.127 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.127 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.127 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.127 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.127 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.127 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.127 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.127 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.127 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.127 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.127 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.127 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.388 02:45:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 20062884 kB' 'MemUsed: 12767000 kB' 'SwapCached: 0 kB' 'Active: 7372280 kB' 'Inactive: 3341192 kB' 'Active(anon): 7219732 kB' 'Inactive(anon): 0 kB' 'Active(file): 152548 kB' 'Inactive(file): 3341192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10578368 kB' 'Mapped: 45848 kB' 'AnonPages: 138208 kB' 'Shmem: 7084628 kB' 'KernelStack: 6520 kB' 'PageTables: 3696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 129364 kB' 'Slab: 353316 kB' 'SReclaimable: 129364 kB' 'SUnreclaim: 223952 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
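At this point the test has collected surp=0, resv=0 and HugePages_Total=1025, confirmed that the total matches nr_hugepages + surp + resv, and get_nodes has recorded 512 and 513 pages on the two NUMA nodes before each node is re-checked through its own meminfo file. A rough sketch of that bookkeeping follows; variable names track the xtrace, but the sysfs path used to read the per-node page count is an assumption (only the resulting values 512 and 513 are visible in the trace):

#!/usr/bin/env bash
shopt -s extglob nullglob

nr_hugepages=1025   # the odd page count this test case configures
surp=0              # system-wide get_meminfo HugePages_Surp, see above
resv=0              # system-wide get_meminfo HugePages_Rsvd, see above
total=1025          # get_meminfo HugePages_Total, see above

# The system-wide total has to account for the request plus any surplus and
# reserved pages before the per-node split is inspected.
(( total == nr_hugepages + surp + resv )) || exit 1

# get_nodes (sketch): record how many 2048 kB hugepages each NUMA node holds.
# The path below is an assumed sysfs location; in this run the loop ends up
# with nodes_sys[0]=512 and nodes_sys[1]=513.
declare -a nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
	nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
no_nodes=${#nodes_sys[@]}
(( no_nodes > 0 )) || exit 1

sum=0
for n in "${nodes_sys[@]}"; do (( sum += n )); done
echo "per-node split: ${nodes_sys[*]} (sum: $sum, expected: $nr_hugepages)"

# Each node is then re-read through its own meminfo file, e.g.
# /sys/devices/system/node/node0/meminfo, which is why the same key-by-key
# scan repeats below for HugePages_Surp with node=0.

Splitting the odd total of 1025 pages as 512 + 513 is presumably what gives the odd_alloc case its name: it exercises the per-node distribution when the request cannot be divided evenly between the two nodes.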
00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.388 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.389 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711848 kB' 'MemFree: 19136332 kB' 'MemUsed: 8575516 kB' 'SwapCached: 0 kB' 'Active: 5452084 kB' 'Inactive: 1114328 kB' 'Active(anon): 5038800 kB' 'Inactive(anon): 0 kB' 'Active(file): 413284 kB' 'Inactive(file): 1114328 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6203560 kB' 'Mapped: 137764 kB' 'AnonPages: 362916 kB' 'Shmem: 4675948 kB' 'KernelStack: 6280 kB' 'PageTables: 4544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115852 kB' 'Slab: 286436 kB' 'SReclaimable: 115852 kB' 'SUnreclaim: 170584 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
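Annotation: the long runs of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue" above come from setup/common.sh's get_meminfo helper, which slurps either /proc/meminfo or the per-node /sys/devices/system/node/node<N>/meminfo file, strips the "Node <N> " prefix, and scans key/value pairs until the requested field matches. A minimal standalone sketch of the same technique follows; the function name and local variable names are illustrative, not the repository's own code.

  shopt -s extglob   # needed for the +([0-9]) pattern used to strip the node prefix

  # Sketch: print the value of one meminfo field, optionally scoped to a NUMA node,
  # mirroring the IFS=': ' / read -r var val _ loop visible in the trace above.
  get_meminfo_sketch() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      # Per-node queries read the node-specific file when it exists.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      # Node files prefix every line with "Node <N> "; drop it before parsing.
      mem=("${mem[@]#Node +([0-9]) }")
      local line var val _
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue   # skip non-matching keys, as in the trace
          echo "$val"
          return 0
      done
      return 1
  }

For example, "get_meminfo_sketch HugePages_Surp 1" would print 0 for the node-1 query traced here.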
00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.390 02:45:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.390 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.391 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.391 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.391 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.391 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.391 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.391 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.391 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.391 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.391 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.391 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.391 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.391 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.391 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.391 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.391 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.391 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.391 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.391 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.391 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.391 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.391 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.391 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.391 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.391 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.391 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.391 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:02.391 02:45:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:02.391 02:45:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:02.391 02:45:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:02.391 02:45:52 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:02.391 02:45:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:02.391 02:45:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:05:02.391 node0=512 expecting 513 00:05:02.391 02:45:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:02.391 02:45:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:02.391 02:45:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:02.391 02:45:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:05:02.391 node1=513 expecting 512 00:05:02.391 02:45:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:05:02.391 00:05:02.391 real 0m1.339s 00:05:02.391 user 0m0.567s 00:05:02.391 sys 0m0.731s 00:05:02.391 02:45:52 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:02.391 02:45:52 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:02.391 ************************************ 00:05:02.391 END TEST odd_alloc 00:05:02.391 ************************************ 00:05:02.391 02:45:52 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:02.391 02:45:52 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:02.391 02:45:52 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:02.391 02:45:52 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:02.391 ************************************ 00:05:02.391 START TEST custom_alloc 00:05:02.391 ************************************ 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:02.391 
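Annotation: the custom_alloc test starting here converts a requested pool size into a hugepage count and then spreads it over the NUMA nodes, which is why the trace shows nr_hugepages=512 for the 1048576 kB request and 256 pages assigned to each of the two nodes. A rough sketch of that arithmetic, assuming the default 2048 kB hugepage size reported later in this log (variable names are illustrative):

  # Sketch: derive a hugepage count from a size in kB and split it evenly across
  # nodes, mirroring get_test_nr_hugepages / get_test_nr_hugepages_per_node above.
  default_hugepages=2048          # kB, matches "Hugepagesize: 2048 kB" in this run
  size_kb=1048576                 # first request made by the custom_alloc test
  no_nodes=2

  nr_hugepages=$(( size_kb / default_hugepages ))       # 1048576 / 2048 = 512
  declare -a nodes_test
  for (( node = no_nodes - 1; node >= 0; node-- )); do
      nodes_test[node]=$(( nr_hugepages / no_nodes ))   # 256 per node
  done
  echo "total=$nr_hugepages per-node=${nodes_test[*]}"  # total=512 per-node=256 256

The second request in the trace, 2097152 kB, works out the same way to 1024 pages, which is where nodes_hp[1]=1024 comes from.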
02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # 
HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:02.391 02:45:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:03.326 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:03.326 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:03.326 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:03.326 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:03.326 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:03.326 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:03.326 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:03.326 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:03.326 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:03.326 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:03.326 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:03.326 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:03.326 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:03.326 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:03.587 0000:80:04.2 (8086 0e22): 
Already using the vfio-pci driver 00:05:03.587 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:03.587 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:03.587 02:45:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:05:03.587 02:45:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:03.587 02:45:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:03.587 02:45:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:03.587 02:45:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:03.587 02:45:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:03.587 02:45:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:03.587 02:45:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:03.587 02:45:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:03.587 02:45:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:03.587 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:03.587 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:03.587 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:03.587 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.587 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.587 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.587 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.587 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.587 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.587 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.587 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 38159800 kB' 'MemAvailable: 42810136 kB' 'Buffers: 3728 kB' 'Cached: 16778248 kB' 'SwapCached: 0 kB' 'Active: 12824944 kB' 'Inactive: 4455520 kB' 'Active(anon): 12259112 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4455520 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 501636 kB' 'Mapped: 183628 kB' 'Shmem: 11760624 kB' 'KReclaimable: 245216 kB' 'Slab: 639824 kB' 'SReclaimable: 245216 kB' 'SUnreclaim: 394608 kB' 'KernelStack: 12864 kB' 'PageTables: 8328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 13396816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197180 kB' 'VmallocChunk: 0 kB' 'Percpu: 41088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2872924 kB' 'DirectMap2M: 21164032 kB' 'DirectMap1G: 45088768 kB' 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.588 02:45:54 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.588 02:45:54 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.588 02:45:54 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.588 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile 
-t mem 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 38159780 kB' 'MemAvailable: 42810116 kB' 'Buffers: 3728 kB' 'Cached: 16778252 kB' 'SwapCached: 0 kB' 'Active: 12825884 kB' 'Inactive: 4455520 kB' 'Active(anon): 12260052 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4455520 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502616 kB' 'Mapped: 183628 kB' 'Shmem: 11760628 kB' 'KReclaimable: 245216 kB' 'Slab: 639824 kB' 'SReclaimable: 245216 kB' 'SUnreclaim: 394608 kB' 'KernelStack: 12912 kB' 'PageTables: 8560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 13398320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197180 kB' 'VmallocChunk: 0 kB' 'Percpu: 41088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2872924 kB' 'DirectMap2M: 21164032 kB' 'DirectMap1G: 45088768 kB' 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.589 02:45:54 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.589 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
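The xtrace above is the get_meminfo routine from the setup/common.sh this test sources: it mapfiles the chosen meminfo file, strips any leading "Node <n> " prefix, then reads each line as "var val" with IFS=': ' and keeps hitting continue until var equals the requested key (here HugePages_Surp), at which point it echoes the value. A minimal self-contained sketch of that pattern, assuming a hypothetical helper name read_meminfo_field (the traced function itself is get_meminfo, and this is not a copy of it):

#!/usr/bin/env bash
# Sketch only -- mirrors the parsing pattern visible in the trace above.
shopt -s extglob

read_meminfo_field() {                     # hypothetical name, for illustration
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node queries fall back to /proc/meminfo when the node file is absent;
    # the trace shows the same kind of test against .../node/node$node/meminfo
    # with $node empty.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo

    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")       # drop the "Node N " prefix of per-node files

    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # skip field after field, as in the log
        echo "$val"
        return 0
    done
    return 1
}

read_meminfo_field HugePages_Surp          # prints 0 on the machine in this log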
00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.590 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 38161064 kB' 'MemAvailable: 42811400 kB' 'Buffers: 3728 kB' 'Cached: 16778268 kB' 'SwapCached: 0 kB' 'Active: 12825668 kB' 'Inactive: 4455520 kB' 'Active(anon): 12259836 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4455520 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502308 kB' 'Mapped: 183628 kB' 'Shmem: 11760644 kB' 'KReclaimable: 245216 kB' 'Slab: 639820 kB' 'SReclaimable: 245216 kB' 'SUnreclaim: 
394604 kB' 'KernelStack: 13376 kB' 'PageTables: 10004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 13398340 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197404 kB' 'VmallocChunk: 0 kB' 'Percpu: 41088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2872924 kB' 'DirectMap2M: 21164032 kB' 'DirectMap1G: 45088768 kB' 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.591 
02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.591 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.592 02:45:54 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.592 
02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.592 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
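With HugePages_Rsvd matched, the entries just below show hugepages.sh folding the counters it has collected into a consistency check: the 1536-page pool this custom_alloc test configured must equal nr_hugepages + surp + resv, and with surplus and reserved both 0 here that reduces to 1536 == nr_hugepages. Roughly, in the shape the trace (hugepages.sh@97 through @109) suggests, reusing the illustrative read_meminfo_field helper sketched earlier; this is a sketch, not the script itself:

# Accounting step as suggested by the trace; values as logged on this node.
nr_hugepages=1536                                # pool size the test set up
anon=$(read_meminfo_field AnonHugePages)         # 0 in the snapshot above
surp=$(read_meminfo_field HugePages_Surp)        # 0
resv=$(read_meminfo_field HugePages_Rsvd)        # 0

# Every page must be accounted for before the test proceeds:
(( 1536 == nr_hugepages + surp + resv )) &&
    (( 1536 == nr_hugepages )) &&
    echo "hugepage pool consistent: total=1536 free=1536 rsvd=$resv surp=$surp"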
00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:05:03.593 nr_hugepages=1536 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:03.593 resv_hugepages=0 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:03.593 surplus_hugepages=0 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:03.593 anon_hugepages=0 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 38159464 kB' 'MemAvailable: 42809800 kB' 'Buffers: 3728 kB' 'Cached: 16778292 kB' 'SwapCached: 0 kB' 'Active: 12825576 kB' 'Inactive: 4455520 kB' 'Active(anon): 12259744 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4455520 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502236 kB' 'Mapped: 183628 kB' 'Shmem: 11760668 kB' 'KReclaimable: 245216 kB' 'Slab: 639804 kB' 'SReclaimable: 245216 kB' 'SUnreclaim: 394588 kB' 'KernelStack: 13040 kB' 'PageTables: 8616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 13398364 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197340 kB' 'VmallocChunk: 0 kB' 'Percpu: 41088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
'Hugetlb: 3145728 kB' 'DirectMap4k: 2872924 kB' 'DirectMap2M: 21164032 kB' 'DirectMap1G: 45088768 kB' 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.593 02:45:54 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
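The snapshot this final get_meminfo HugePages_Total pass is walking already shows the pool arithmetic lining up: 1536 pages at a Hugepagesize of 2048 kB is 1536 * 2048 = 3145728 kB, exactly the Hugetlb figure reported above, i.e. a 3 GiB pool of 2 MiB pages with all 1536 still free. The same identity as a quick check, again using the hypothetical read_meminfo_field helper from the earlier sketch:

# Cross-check Hugetlb against HugePages_Total * Hugepagesize (values in kB).
total=$(read_meminfo_field HugePages_Total)      # 1536
pagesz=$(read_meminfo_field Hugepagesize)        # 2048
hugetlb=$(read_meminfo_field Hugetlb)            # 3145728
(( total * pagesz == hugetlb )) &&
    echo "pool fully accounted for: ${total} x ${pagesz} kB = $((total * pagesz)) kB"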
00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.593 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.594 02:45:54 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:03.594 02:45:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:03.595 02:45:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:03.595 02:45:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:03.595 02:45:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:03.595 02:45:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for 
node in "${!nodes_test[@]}" 00:05:03.595 02:45:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:03.595 02:45:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:03.595 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:03.595 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:03.595 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:03.595 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.595 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.595 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:03.595 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:03.595 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.855 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.855 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.855 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.855 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 20052620 kB' 'MemUsed: 12777264 kB' 'SwapCached: 0 kB' 'Active: 7373972 kB' 'Inactive: 3341192 kB' 'Active(anon): 7221424 kB' 'Inactive(anon): 0 kB' 'Active(file): 152548 kB' 'Inactive(file): 3341192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10578380 kB' 'Mapped: 45852 kB' 'AnonPages: 139856 kB' 'Shmem: 7084640 kB' 'KernelStack: 7064 kB' 'PageTables: 5360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 129364 kB' 'Slab: 353244 kB' 'SReclaimable: 129364 kB' 'SUnreclaim: 223880 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:03.855 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.855 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.855 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.855 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.855 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.855 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.855 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.855 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.855 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.855 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.855 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.855 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.855 02:45:54 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.855 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.855 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.855 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.855 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.855 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.855 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.855 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.855 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.855 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.855 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.855 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.855 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.855 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.855 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.855 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.855 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.855 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.855 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.855 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.855 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.855 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.855 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.855 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.855 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.855 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.856 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711848 kB' 'MemFree: 18104272 kB' 'MemUsed: 9607576 kB' 'SwapCached: 0 kB' 'Active: 5452376 kB' 'Inactive: 1114328 kB' 'Active(anon): 5039092 kB' 'Inactive(anon): 0 kB' 'Active(file): 413284 kB' 'Inactive(file): 1114328 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6203680 kB' 'Mapped: 138212 kB' 'AnonPages: 363564 kB' 'Shmem: 4676068 kB' 'KernelStack: 6280 kB' 'PageTables: 4600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115852 kB' 'Slab: 286560 kB' 'SReclaimable: 115852 kB' 'SUnreclaim: 170708 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
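The same scan repeats below for node 1's HugePages_Surp, after which setup/hugepages.sh folds the surplus into its per-node counters (nodes_test / nodes_sys in the trace) and prints the node0=512 / node1=1024 expectations. A loose sketch of that accounting, assuming the get_meminfo_sketch helper above is available; expected_split, got and want are illustrative names only, not the script's own arrays:

expected_split=(512 1024)        # hugepages requested per NUMA node (node0, node1)
got="" want=""
for node in 0 1; do
    surp=$(get_meminfo_sketch HugePages_Surp "$node")   # 0 on both nodes in this run
    echo "node${node}=$(( expected_split[node] + surp )) expecting ${expected_split[node]}"
    got+="${got:+,}$(( expected_split[node] + surp ))"
    want+="${want:+,}${expected_split[node]}"
done
# setup/hugepages.sh@130 makes essentially this comparison ("512,1024" == "512,1024" here)
[[ $got == "$want" ]] && echo "custom_alloc split verified"

Because the observed 512,1024 pair matches the request, the trace proceeds to the END TEST custom_alloc banner further down and then starts no_shrink_alloc.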
00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.857 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.857 02:45:54 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.858 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.858 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.858 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.858 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.858 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.858 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.858 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.858 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.858 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.858 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.858 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.858 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.858 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.858 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.858 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.858 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.858 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.858 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.858 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.858 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.858 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.858 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.858 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.858 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.858 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.858 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.858 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.858 02:45:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:03.858 02:45:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:03.858 02:45:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:03.858 02:45:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:03.858 02:45:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:03.858 02:45:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:03.858 node0=512 
expecting 512 00:05:03.858 02:45:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:03.858 02:45:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:03.858 02:45:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:03.858 02:45:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:05:03.858 node1=1024 expecting 1024 00:05:03.858 02:45:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:05:03.858 00:05:03.858 real 0m1.409s 00:05:03.858 user 0m0.610s 00:05:03.858 sys 0m0.757s 00:05:03.858 02:45:54 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:03.858 02:45:54 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:03.858 ************************************ 00:05:03.858 END TEST custom_alloc 00:05:03.858 ************************************ 00:05:03.858 02:45:54 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:03.858 02:45:54 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:03.858 02:45:54 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:03.858 02:45:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:03.858 ************************************ 00:05:03.858 START TEST no_shrink_alloc 00:05:03.858 ************************************ 00:05:03.858 02:45:54 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:05:03.858 02:45:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:03.858 02:45:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:03.858 02:45:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:03.858 02:45:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:05:03.858 02:45:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:03.858 02:45:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:03.858 02:45:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:03.858 02:45:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:03.858 02:45:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:03.858 02:45:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:03.858 02:45:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:03.858 02:45:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:03.858 02:45:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:03.858 02:45:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:03.858 02:45:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:03.858 02:45:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:03.858 02:45:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:03.858 02:45:54 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:03.858 02:45:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:03.858 02:45:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:05:03.858 02:45:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:03.858 02:45:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:04.794 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:04.794 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:04.795 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:04.795 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:04.795 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:04.795 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:04.795 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:04.795 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:04.795 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:04.795 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:04.795 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:04.795 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:04.795 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:04.795 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:04.795 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:04.795 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:04.795 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- 
# mapfile -t mem 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 39194592 kB' 'MemAvailable: 43844928 kB' 'Buffers: 3728 kB' 'Cached: 16778376 kB' 'SwapCached: 0 kB' 'Active: 12825588 kB' 'Inactive: 4455520 kB' 'Active(anon): 12259756 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4455520 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 501856 kB' 'Mapped: 184160 kB' 'Shmem: 11760752 kB' 'KReclaimable: 245216 kB' 'Slab: 639836 kB' 'SReclaimable: 245216 kB' 'SUnreclaim: 394620 kB' 'KernelStack: 12832 kB' 'PageTables: 8284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 13396044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197164 kB' 'VmallocChunk: 0 kB' 'Percpu: 41088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2872924 kB' 'DirectMap2M: 21164032 kB' 'DirectMap1G: 45088768 kB' 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.059 02:45:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.059 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.060 02:45:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 39194592 kB' 'MemAvailable: 43844928 kB' 'Buffers: 3728 kB' 'Cached: 16778376 kB' 'SwapCached: 0 kB' 'Active: 12826108 kB' 'Inactive: 4455520 kB' 'Active(anon): 12260276 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4455520 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502436 kB' 'Mapped: 184236 kB' 'Shmem: 11760752 kB' 'KReclaimable: 245216 kB' 'Slab: 639884 kB' 'SReclaimable: 245216 kB' 'SUnreclaim: 394668 kB' 'KernelStack: 12816 kB' 'PageTables: 8236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 13396064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197132 kB' 'VmallocChunk: 0 kB' 'Percpu: 41088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2872924 kB' 'DirectMap2M: 21164032 kB' 'DirectMap1G: 45088768 kB' 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.060 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.061 02:45:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.061 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.062 02:45:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 39194704 kB' 'MemAvailable: 43845040 kB' 'Buffers: 3728 kB' 'Cached: 16778380 kB' 'SwapCached: 0 kB' 'Active: 12825776 kB' 'Inactive: 4455520 kB' 'Active(anon): 12259944 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4455520 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502508 kB' 'Mapped: 183712 kB' 'Shmem: 11760756 kB' 'KReclaimable: 245216 kB' 'Slab: 639884 kB' 'SReclaimable: 245216 kB' 'SUnreclaim: 394668 kB' 'KernelStack: 12896 kB' 'PageTables: 8352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 13396084 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197164 kB' 'VmallocChunk: 0 kB' 'Percpu: 41088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2872924 kB' 'DirectMap2M: 21164032 kB' 'DirectMap1G: 45088768 kB' 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.062 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.063 02:45:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.063 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.064 
02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 
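
Editorial note: the xtrace above is three consecutive calls into the get_meminfo helper of setup/common.sh (AnonHugePages, HugePages_Surp, HugePages_Rsvd), each one walking /proc/meminfo key by key until the requested field matches and its value is echoed. As a reading aid, below is a minimal bash sketch of that parsing loop, reconstructed from the trace itself; the function body, the extglob prefix-stripping and the usage lines are an approximation of what the trace shows, not the verbatim SPDK script.

#!/usr/bin/env bash
# Sketch of the get_meminfo pattern traced in this log (assumed reconstruction,
# not the exact SPDK setup/common.sh source).
shopt -s extglob

get_meminfo() {
	local get=$1        # key to look up, e.g. HugePages_Surp
	local node=${2:-}   # optional NUMA node; empty means system-wide /proc/meminfo
	local var val _
	local mem_f mem

	mem_f=/proc/meminfo
	# When a node is given and a per-node meminfo exists, read that file instead
	# (the trace tests /sys/devices/system/node/node$node/meminfo).
	if [[ -e /sys/devices/system/node/node$node/meminfo ]] && [[ -n $node ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	# Slurp the file and drop the "Node N " prefix that per-node files carry.
	mapfile -t mem < "$mem_f"
	mem=("${mem[@]#Node +([0-9]) }")

	# Walk the "Key: value kB" lines until the requested key is found --
	# the same [[ ... == ... ]] / continue loop repeated throughout the trace.
	local line
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done
	return 1
}

# Usage matching the checks in this log section (values on the test host were
# anon=0, surp=0, resv=0, so the script asserts nr_hugepages == HugePages_Total).
get_meminfo AnonHugePages
get_meminfo HugePages_Surp
get_meminfo HugePages_Rsvd

The design is a plain literal comparison per key, with the per-node variant differing only in which meminfo file is read, which is why the trace repeats the identical continue pattern for every field before the match is hit.
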
00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:05.064 nr_hugepages=1024 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:05.064 resv_hugepages=0 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:05.064 surplus_hugepages=0 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:05.064 anon_hugepages=0 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 39195140 kB' 'MemAvailable: 43845476 kB' 'Buffers: 3728 kB' 'Cached: 16778420 kB' 'SwapCached: 0 kB' 'Active: 12825300 kB' 'Inactive: 4455520 kB' 'Active(anon): 12259468 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4455520 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 501948 kB' 'Mapped: 183712 kB' 'Shmem: 11760796 kB' 'KReclaimable: 245216 kB' 'Slab: 639884 kB' 'SReclaimable: 245216 kB' 'SUnreclaim: 394668 kB' 'KernelStack: 12816 kB' 'PageTables: 8084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 13396108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197164 kB' 'VmallocChunk: 0 kB' 'Percpu: 41088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2872924 kB' 'DirectMap2M: 21164032 kB' 'DirectMap1G: 45088768 kB' 00:05:05.064 
02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.064 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.065 02:45:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.065 02:45:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.065 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- 
# for node in /sys/devices/system/node/node+([0-9]) 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 19017656 kB' 'MemUsed: 13812228 kB' 'SwapCached: 0 kB' 'Active: 7373500 kB' 'Inactive: 3341192 kB' 'Active(anon): 7220952 kB' 'Inactive(anon): 0 kB' 'Active(file): 152548 kB' 'Inactive(file): 3341192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10578460 kB' 'Mapped: 45848 kB' 'AnonPages: 139492 kB' 'Shmem: 7084720 kB' 'KernelStack: 6632 kB' 'PageTables: 3984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 129364 kB' 'Slab: 353456 kB' 'SReclaimable: 129364 kB' 'SUnreclaim: 224092 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
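The lookup above switches to the per-node view: get_nodes globs /sys/devices/system/node/node+([0-9]) (two nodes on this box, nodes_sys[0]=1024 and nodes_sys[1]=0), and get_meminfo with a node argument points mem_f at /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that common.sh strips before running the same key/value scan. A small sketch of that per-node read follows, assuming the standard sysfs layout; node_meminfo_value is an illustrative name, and the literal-prefix strip stands in for the +([0-9]) extglob pattern used in the trace.

node_meminfo_value() {
    local node=$1 get=$2 line var val _
    while IFS= read -r line; do
        line=${line#"Node $node "}                 # drop the "Node N " prefix carried by per-node meminfo
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "/sys/devices/system/node/node${node}/meminfo"
    return 1
}

# enumerate nodes as hugepages.sh@29 does, then read each node's surplus pages (cf. @117)
for d in /sys/devices/system/node/node[0-9]*; do
    n=${d##*node}
    echo "node${n}: HugePages_Surp=$(node_meminfo_value "$n" HugePages_Surp)"
done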
00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.066 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.067 02:45:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.067 02:45:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:05.067 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:05.067 node0=1024 expecting 1024 00:05:05.068 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:05.068 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:05.068 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:05.068 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:05:05.068 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:05.068 02:45:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:06.448 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:06.448 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:06.448 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:06.448 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:06.448 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:06.448 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:06.448 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:06.448 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:06.448 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:06.448 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:06.448 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:06.448 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:06.448 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:06.448 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:06.448 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:06.448 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:06.448 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:06.448 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:06.448 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:06.448 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:06.448 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:06.448 02:45:57 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:06.448 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:06.448 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:06.448 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:06.448 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:06.448 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:06.448 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:06.448 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:06.448 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:06.448 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:06.448 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.448 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.448 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.448 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.448 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.448 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.448 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.448 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 39186200 kB' 'MemAvailable: 43836536 kB' 'Buffers: 3728 kB' 'Cached: 16778488 kB' 'SwapCached: 0 kB' 'Active: 12826332 kB' 'Inactive: 4455520 kB' 'Active(anon): 12260500 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4455520 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502816 kB' 'Mapped: 183720 kB' 'Shmem: 11760864 kB' 'KReclaimable: 245216 kB' 'Slab: 639908 kB' 'SReclaimable: 245216 kB' 'SUnreclaim: 394692 kB' 'KernelStack: 12864 kB' 'PageTables: 8244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 13396620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197260 kB' 'VmallocChunk: 0 kB' 'Percpu: 41088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2872924 kB' 'DirectMap2M: 21164032 kB' 'DirectMap1G: 45088768 kB' 00:05:06.448 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.448 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.448 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.448 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
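By this point the no_shrink_alloc pass has confirmed node0=1024, so hugepages.sh@202 re-runs scripts/setup.sh with CLEAR_HUGE=no and NRHUGE=512; the INFO line shows the 1024 pages already allocated on node0 are left in place rather than shrunk to the smaller request, and verify_nr_hugepages then starts re-reading the counters, first checking the transparent_hugepage setting ("always [madvise] never" here) before fetching AnonHugePages. A rough standalone approximation of those follow-up reads, assuming the usual /sys and /proc paths rather than the setup/common.sh helpers:

thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
anon=0
if [[ $thp != *"[never]"* ]]; then
    # THP is not pinned to never, so anonymous huge pages get tracked separately (hugepages.sh@96-97)
    anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
fi
total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
free=$(awk '$1 == "HugePages_Free:" {print $2}' /proc/meminfo)
echo "HugePages_Total=$total HugePages_Free=$free AnonHugePages=${anon} kB"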
00:05:06.448 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.448 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.448 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.448 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.448 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.448 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.448 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.448 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.448 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.448 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.448 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.448 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.448 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.448 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.448 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.448 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
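Stripped of the field-by-field scanning above and below, the assertions this test keeps making (hugepages.sh@107, @110 and the per-node pass that printed "node0=1024 expecting 1024") reduce to one piece of arithmetic: the HugePages_Total the kernel reports must equal the requested nr_hugepages plus the surplus and reserved counts gathered earlier, and node0 must hold the expected share. A minimal restatement of that check, using the values from this run (1024 requested, surplus and reserved both 0) and plain awk lookups instead of the traced helpers:

nr_requested=1024                                            # nr_hugepages echoed at hugepages.sh@102
total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)
rsvd=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)
if (( total == nr_requested + surp + rsvd )); then
    echo "hugepage accounting holds: $total == $nr_requested + $surp + $rsvd"
else
    echo "hugepage accounting mismatch: total=$total" >&2
fi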
00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.449 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- 
# mem=("${mem[@]#Node +([0-9]) }") 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 39198556 kB' 'MemAvailable: 43848892 kB' 'Buffers: 3728 kB' 'Cached: 16778488 kB' 'SwapCached: 0 kB' 'Active: 12826212 kB' 'Inactive: 4455520 kB' 'Active(anon): 12260380 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4455520 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502704 kB' 'Mapped: 183720 kB' 'Shmem: 11760864 kB' 'KReclaimable: 245216 kB' 'Slab: 639884 kB' 'SReclaimable: 245216 kB' 'SUnreclaim: 394668 kB' 'KernelStack: 12832 kB' 'PageTables: 8148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 13396636 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197212 kB' 'VmallocChunk: 0 kB' 'Percpu: 41088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2872924 kB' 'DirectMap2M: 21164032 kB' 'DirectMap1G: 45088768 kB' 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.450 02:45:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
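
Before that scan starts, the common.sh@18-29 entries show how the helper picks its input: with no node argument it reads /proc/meminfo (the probe of /sys/devices/system/node/node/meminfo fails because the node number is empty), loads the file into an array with mapfile, and strips the "Node N " prefix that per-node meminfo files carry. Roughly, and assuming extglob is enabled as it appears to be in the test environment:

#!/usr/bin/env bash
shopt -s extglob                            # needed for the +([0-9]) pattern below
node=${1:-}                                 # optional NUMA node number
mem_f=/proc/meminfo
# Per-node files live at /sys/devices/system/node/node<N>/meminfo; with an empty
# node the path becomes .../node/node/meminfo, the -e test fails, and the global file is kept.
[[ -e /sys/devices/system/node/node$node/meminfo ]] && mem_f=/sys/devices/system/node/node$node/meminfo
mapfile -t mem < "$mem_f"                   # one array element per meminfo line
mem=("${mem[@]#Node +([0-9]) }")            # drop the 'Node N ' prefix where present
printf '%s\n' "${mem[@]}"                   # the snapshot dump seen at common.sh@16 in the trace
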
00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.450 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
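
Each snapshot printed at common.sh@16 also lets the hugepage figures be cross-checked directly: 1024 pages of 2048 kB each account for the 2097152 kB reported as Hugetlb, i.e. 2 GiB set aside for the test. A one-liner to recompute that from a live /proc/meminfo (values will differ on other machines):

awk '/^HugePages_Total:/ {n=$2} /^Hugepagesize:/ {sz=$2}
     END {printf "%d pages x %d kB = %d kB\n", n, sz, n*sz}' /proc/meminfo
# On the node above: 1024 pages x 2048 kB = 2097152 kB, matching the Hugetlb line.
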
00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.451 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.452 02:45:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 39197864 kB' 'MemAvailable: 43848200 kB' 'Buffers: 3728 kB' 'Cached: 16778508 kB' 'SwapCached: 0 kB' 'Active: 12825440 kB' 'Inactive: 4455520 kB' 'Active(anon): 12259608 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4455520 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 501880 kB' 'Mapped: 183696 kB' 'Shmem: 11760884 kB' 'KReclaimable: 245216 kB' 'Slab: 639888 kB' 'SReclaimable: 245216 kB' 'SUnreclaim: 394672 kB' 'KernelStack: 12832 kB' 'PageTables: 8120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 13396660 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197212 kB' 'VmallocChunk: 0 kB' 'Percpu: 41088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2872924 kB' 'DirectMap2M: 21164032 kB' 'DirectMap1G: 45088768 kB' 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.452 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.453 02:45:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
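
A note on reading these entries: backslash-riddled operands such as \H\u\g\e\P\a\g\e\s\_\R\s\v\d are not corruption. The script compares each key against the quoted lookup variable, and bash's xtrace prints a quoted pattern operand with every character escaped to show it will match literally rather than as a glob. The same rendering can be reproduced in any bash session, for example:

set -x
get=HugePages_Rsvd
[[ HugePages_Free == "$get" ]]   # traced as: [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] (the test is false; only the trace line matters)
set +x
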
00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.453 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.454 02:45:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:06.454 nr_hugepages=1024 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:06.454 resv_hugepages=0 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:06.454 surplus_hugepages=0 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:06.454 anon_hugepages=0 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.454 02:45:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 39197864 kB' 'MemAvailable: 43848200 kB' 'Buffers: 3728 kB' 'Cached: 16778532 kB' 'SwapCached: 0 kB' 'Active: 12825628 kB' 'Inactive: 4455520 kB' 'Active(anon): 12259796 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4455520 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502096 kB' 'Mapped: 183640 kB' 'Shmem: 11760908 kB' 'KReclaimable: 245216 kB' 'Slab: 639940 kB' 'SReclaimable: 245216 kB' 'SUnreclaim: 394724 kB' 'KernelStack: 12864 kB' 'PageTables: 8240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 13396680 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197212 kB' 'VmallocChunk: 0 kB' 'Percpu: 41088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2872924 kB' 'DirectMap2M: 21164032 kB' 'DirectMap1G: 45088768 kB' 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
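
By this point hugepages.sh has established anon=0, surp=0 and resv=0 from the three lookups, echoed nr_hugepages=1024, and checks that 1024 matches nr_hugepages + surp + resv before re-reading HugePages_Total. A compact way to perform the same consistency check against a live /proc/meminfo, with illustrative names rather than the test's own variables:

# Hypothetical helper: confirm that no huge pages are surplus or reserved and that
# every configured page is still free, as in the snapshots above.
check_hugepage_accounting() {
    local total free rsvd surp
    read -r total free rsvd surp < <(awk '
        /^HugePages_Total:/ {t=$2}
        /^HugePages_Free:/  {f=$2}
        /^HugePages_Rsvd:/  {r=$2}
        /^HugePages_Surp:/  {s=$2}
        END {print t, f, r, s}' /proc/meminfo)
    echo "total=$total free=$free rsvd=$rsvd surp=$surp"
    (( surp == 0 && rsvd == 0 && free == total ))
}
# Here this would print: total=1024 free=1024 rsvd=0 surp=0 and return success.
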
00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.454 02:45:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.454 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.455 02:45:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.455 02:45:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:06.455 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 19007612 kB' 'MemUsed: 13822272 kB' 'SwapCached: 0 kB' 'Active: 7373120 kB' 
'Inactive: 3341192 kB' 'Active(anon): 7220572 kB' 'Inactive(anon): 0 kB' 'Active(file): 152548 kB' 'Inactive(file): 3341192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10578564 kB' 'Mapped: 45848 kB' 'AnonPages: 138884 kB' 'Shmem: 7084824 kB' 'KernelStack: 6600 kB' 'PageTables: 3744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 129364 kB' 'Slab: 353536 kB' 'SReclaimable: 129364 kB' 'SUnreclaim: 224172 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.456 
02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.456 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.457 02:45:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:06.457 node0=1024 expecting 1024 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:06.457 00:05:06.457 real 0m2.677s 00:05:06.457 user 0m1.109s 00:05:06.457 sys 0m1.487s 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:06.457 02:45:57 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:06.457 ************************************ 00:05:06.457 END TEST no_shrink_alloc 00:05:06.457 ************************************ 00:05:06.457 02:45:57 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:06.457 02:45:57 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:06.457 02:45:57 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:06.457 02:45:57 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:06.457 02:45:57 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:06.457 02:45:57 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:06.457 02:45:57 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:06.457 02:45:57 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:06.457 02:45:57 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:06.457 02:45:57 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:06.457 02:45:57 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:06.457 02:45:57 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:06.457 02:45:57 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:06.457 02:45:57 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:06.457 00:05:06.457 real 0m10.776s 00:05:06.457 user 0m4.129s 00:05:06.457 sys 0m5.417s 00:05:06.457 02:45:57 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:06.457 02:45:57 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:06.457 ************************************ 00:05:06.457 END TEST hugepages 00:05:06.457 ************************************ 00:05:06.457 02:45:57 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:06.457 02:45:57 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:06.457 02:45:57 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:06.457 02:45:57 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:06.457 ************************************ 00:05:06.457 START TEST driver 00:05:06.457 ************************************ 00:05:06.457 02:45:57 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:06.716 * Looking for test storage... 
00:05:06.716 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:06.716 02:45:57 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:06.716 02:45:57 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:06.716 02:45:57 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:09.267 02:45:59 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:09.267 02:45:59 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:09.267 02:45:59 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:09.267 02:45:59 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:09.267 ************************************ 00:05:09.267 START TEST guess_driver 00:05:09.267 ************************************ 00:05:09.267 02:45:59 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:05:09.267 02:45:59 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:09.267 02:45:59 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:09.267 02:45:59 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:09.267 02:45:59 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:09.267 02:45:59 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:09.267 02:45:59 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:09.267 02:45:59 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:09.267 02:45:59 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:09.267 02:45:59 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:09.267 02:45:59 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:05:09.267 02:45:59 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:05:09.267 02:45:59 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:05:09.267 02:45:59 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:05:09.267 02:45:59 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:05:09.267 02:45:59 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:05:09.267 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:09.267 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:09.267 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:09.267 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:09.267 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:05:09.267 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:05:09.267 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:05:09.267 02:45:59 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:05:09.267 02:45:59 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:05:09.267 02:45:59 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:05:09.267 02:45:59 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:09.267 02:45:59 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:05:09.267 Looking for driver=vfio-pci 00:05:09.267 02:45:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:09.267 02:45:59 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:09.267 02:45:59 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:09.267 02:45:59 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:10.203 02:46:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:10.203 02:46:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:10.203 02:46:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:10.203 02:46:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:10.203 02:46:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:10.203 02:46:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:10.203 02:46:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:10.203 02:46:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:10.203 02:46:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:10.203 02:46:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:10.203 02:46:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:10.203 02:46:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:10.203 02:46:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:10.203 02:46:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:10.203 02:46:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:10.203 02:46:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:10.204 02:46:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:10.204 02:46:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:10.204 02:46:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:10.204 02:46:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:10.204 02:46:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:10.204 02:46:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:10.204 02:46:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:10.204 02:46:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:10.204 02:46:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:10.204 02:46:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:10.204 02:46:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:10.204 02:46:00 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:10.204 02:46:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:10.204 02:46:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:10.204 02:46:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:10.204 02:46:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:10.204 02:46:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:10.204 02:46:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:10.204 02:46:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:10.204 02:46:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:10.204 02:46:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:10.204 02:46:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:10.204 02:46:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:10.204 02:46:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:10.204 02:46:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:10.204 02:46:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:10.204 02:46:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:10.204 02:46:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:10.204 02:46:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:10.204 02:46:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:10.204 02:46:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:10.204 02:46:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:11.138 02:46:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:11.138 02:46:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:11.138 02:46:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:11.138 02:46:01 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:11.138 02:46:01 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:11.138 02:46:01 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:11.138 02:46:01 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:13.669 00:05:13.669 real 0m4.660s 00:05:13.669 user 0m1.028s 00:05:13.669 sys 0m1.704s 00:05:13.669 02:46:04 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:13.669 02:46:04 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:13.669 ************************************ 00:05:13.669 END TEST guess_driver 00:05:13.669 ************************************ 00:05:13.669 00:05:13.669 real 0m7.028s 00:05:13.669 user 0m1.543s 00:05:13.669 sys 0m2.695s 00:05:13.669 02:46:04 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:13.669 
02:46:04 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:13.669 ************************************ 00:05:13.669 END TEST driver 00:05:13.669 ************************************ 00:05:13.669 02:46:04 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:13.669 02:46:04 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:13.669 02:46:04 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:13.669 02:46:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:13.669 ************************************ 00:05:13.669 START TEST devices 00:05:13.669 ************************************ 00:05:13.669 02:46:04 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:13.669 * Looking for test storage... 00:05:13.669 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:13.669 02:46:04 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:13.669 02:46:04 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:13.669 02:46:04 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:13.669 02:46:04 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:15.044 02:46:05 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:15.044 02:46:05 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:05:15.044 02:46:05 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:05:15.044 02:46:05 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:05:15.044 02:46:05 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:15.044 02:46:05 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:05:15.044 02:46:05 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:05:15.044 02:46:05 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:15.044 02:46:05 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:15.044 02:46:05 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:15.044 02:46:05 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:15.044 02:46:05 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:15.044 02:46:05 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:15.044 02:46:05 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:15.045 02:46:05 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:15.045 02:46:05 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:15.045 02:46:05 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:15.045 02:46:05 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:05:15.045 02:46:05 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:05:15.045 02:46:05 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:15.045 02:46:05 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:15.045 02:46:05 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:15.045 No valid GPT data, 
bailing 00:05:15.045 02:46:05 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:15.045 02:46:05 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:15.045 02:46:05 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:15.045 02:46:05 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:15.045 02:46:05 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:15.045 02:46:05 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:15.045 02:46:05 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:05:15.045 02:46:05 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:05:15.045 02:46:05 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:15.045 02:46:05 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:05:15.045 02:46:05 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:15.045 02:46:05 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:15.045 02:46:05 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:15.045 02:46:05 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:15.045 02:46:05 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:15.045 02:46:05 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:15.304 ************************************ 00:05:15.304 START TEST nvme_mount 00:05:15.304 ************************************ 00:05:15.304 02:46:05 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:05:15.304 02:46:05 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:15.304 02:46:05 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:15.304 02:46:05 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:15.304 02:46:05 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:15.304 02:46:05 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:15.304 02:46:05 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:15.304 02:46:05 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:15.304 02:46:05 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:15.304 02:46:05 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:15.304 02:46:05 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:15.304 02:46:05 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:15.304 02:46:05 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:15.304 02:46:05 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:15.304 02:46:05 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:15.304 02:46:05 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:15.304 02:46:05 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:15.304 02:46:05 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:15.304 02:46:05 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:15.304 02:46:05 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:16.238 Creating new GPT entries in memory. 00:05:16.238 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:16.238 other utilities. 00:05:16.238 02:46:06 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:16.238 02:46:06 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:16.238 02:46:06 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:16.238 02:46:06 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:16.238 02:46:06 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:17.173 Creating new GPT entries in memory. 00:05:17.173 The operation has completed successfully. 00:05:17.173 02:46:07 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:17.173 02:46:07 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:17.173 02:46:07 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 213183 00:05:17.173 02:46:07 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:17.173 02:46:07 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:17.173 02:46:07 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:17.173 02:46:07 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:17.173 02:46:07 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:17.173 02:46:07 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:17.432 02:46:07 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:17.432 02:46:07 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:17.432 02:46:07 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:17.432 02:46:07 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:17.432 02:46:07 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:17.432 02:46:07 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:17.432 02:46:07 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:17.432 02:46:07 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:17.432 02:46:07 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
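For readers skimming the xtrace output, the nvme_mount steps above boil down to an ordinary partition/format/mount cycle. The following is a minimal bash sketch of what the log shows, not the verbatim test script; the disk name, sector range and mount point are the ones from this particular run.

# sketch of the nvme_mount preparation seen above
disk=/dev/nvme0n1
mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
sgdisk "$disk" --zap-all                             # wipe any existing GPT/MBR structures
flock "$disk" sgdisk "$disk" --new=1:2048:2099199    # one 1 GiB partition (512-byte sectors)
mkfs.ext4 -qF "${disk}p1"                            # quiet, forced ext4 format
mkdir -p "$mnt" && mount "${disk}p1" "$mnt"          # mount under the test workspace
touch "$mnt/test_nvme"                               # dummy file the verify step looks for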
00:05:17.432 02:46:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.432 02:46:07 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:17.432 02:46:07 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:17.432 02:46:07 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:17.432 02:46:07 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:18.367 02:46:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.367 02:46:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:18.367 02:46:08 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:18.367 02:46:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.367 02:46:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.367 02:46:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.367 02:46:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.367 02:46:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.367 02:46:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.367 02:46:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.367 02:46:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.367 02:46:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.367 02:46:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.367 02:46:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.367 02:46:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.367 02:46:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.367 02:46:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.367 02:46:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.367 02:46:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.367 02:46:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.367 02:46:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.367 02:46:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.367 02:46:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.367 02:46:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.367 02:46:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.367 02:46:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:05:18.367 02:46:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.367 02:46:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.367 02:46:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.367 02:46:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.367 02:46:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.367 02:46:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.367 02:46:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.367 02:46:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.367 02:46:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.367 02:46:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.367 02:46:09 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:18.367 02:46:09 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:18.367 02:46:09 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:18.367 02:46:09 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:18.367 02:46:09 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:18.367 02:46:09 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:18.367 02:46:09 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:18.367 02:46:09 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:18.367 02:46:09 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:18.367 02:46:09 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:18.367 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:18.367 02:46:09 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:18.367 02:46:09 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:18.625 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:18.625 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:18.625 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:18.625 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:18.625 02:46:09 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:18.626 02:46:09 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:18.626 02:46:09 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:18.626 02:46:09 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:18.626 02:46:09 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:18.884 02:46:09 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:18.884 02:46:09 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:18.884 02:46:09 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:18.884 02:46:09 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:18.884 02:46:09 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:18.884 02:46:09 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:18.884 02:46:09 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:18.884 02:46:09 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:18.884 02:46:09 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:18.884 02:46:09 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:18.884 02:46:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.884 02:46:09 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:18.884 02:46:09 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:18.884 02:46:09 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:18.884 02:46:09 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.817 02:46:10 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:19.817 02:46:10 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:21.189 02:46:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.189 02:46:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:21.189 02:46:11 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:21.189 02:46:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.189 02:46:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.189 02:46:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.189 02:46:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.189 02:46:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.189 02:46:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.189 02:46:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.189 02:46:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.189 02:46:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.189 02:46:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.189 02:46:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.189 02:46:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.189 02:46:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.189 02:46:11 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.189 02:46:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.189 02:46:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.189 02:46:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.189 02:46:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.189 02:46:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.189 02:46:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.189 02:46:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.189 02:46:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.189 02:46:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.189 02:46:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.189 02:46:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.189 02:46:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.189 02:46:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.189 02:46:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.189 02:46:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.189 02:46:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.189 02:46:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.189 02:46:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.189 02:46:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.189 02:46:11 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:21.189 02:46:11 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:21.189 02:46:11 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:21.189 02:46:11 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:21.189 02:46:11 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:21.189 02:46:11 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:21.189 02:46:11 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:21.189 02:46:11 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:21.189 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:21.189 00:05:21.189 real 0m5.987s 00:05:21.189 user 0m1.326s 00:05:21.189 sys 0m2.234s 00:05:21.189 02:46:11 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:21.189 02:46:11 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:21.189 ************************************ 00:05:21.189 END TEST nvme_mount 00:05:21.189 ************************************ 
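The wipefs lines above are the tail end of the nvme_mount cleanup. Roughly, and only as a sketch of what the log implies rather than the exact cleanup_nvme function, the teardown is:

# unmount the test filesystem, then erase every signature wipefs knows about
mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
mountpoint -q "$mnt" && umount "$mnt"
[ -b /dev/nvme0n1p1 ] && wipefs --all /dev/nvme0n1p1   # ext4 magic on the partition
[ -b /dev/nvme0n1 ]   && wipefs --all /dev/nvme0n1     # GPT headers and protective MBR

The "2 bytes were erased ... 53 ef" and "45 46 49 20 50 41 52 54" messages in the log are exactly these signatures being removed: the ext4 superblock magic (0xEF53) and the ASCII "EFI PART" GPT header.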
00:05:21.189 02:46:11 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:21.189 02:46:11 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:21.189 02:46:11 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:21.189 02:46:11 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:21.189 ************************************ 00:05:21.189 START TEST dm_mount 00:05:21.189 ************************************ 00:05:21.190 02:46:11 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:05:21.190 02:46:11 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:21.190 02:46:11 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:21.190 02:46:11 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:21.190 02:46:11 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:21.190 02:46:11 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:21.190 02:46:11 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:21.190 02:46:11 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:21.190 02:46:11 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:21.190 02:46:11 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:21.190 02:46:11 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:21.190 02:46:11 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:21.190 02:46:11 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:21.190 02:46:11 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:21.190 02:46:11 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:21.190 02:46:11 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:21.190 02:46:11 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:21.190 02:46:11 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:21.190 02:46:11 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:21.190 02:46:11 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:21.190 02:46:11 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:21.190 02:46:11 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:22.122 Creating new GPT entries in memory. 00:05:22.122 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:22.122 other utilities. 00:05:22.380 02:46:12 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:22.380 02:46:12 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:22.380 02:46:12 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:22.380 02:46:12 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:22.380 02:46:12 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:23.318 Creating new GPT entries in memory. 00:05:23.318 The operation has completed successfully. 
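dm_mount reuses the same partitioning helper with part_no=2, so two 1 GiB partitions are laid out back to back; the second sgdisk call follows just below. The start/end arithmetic printed in the xtrace works out to this sketch (sector numbers copied from this run, 512-byte sectors):

disk=/dev/nvme0n1
size=$(( 1073741824 / 512 ))          # 2097152 sectors per partition
part_start=0 part_end=0
for part in 1 2; do
  (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
  (( part_end   = part_start + size - 1 ))
  flock "$disk" sgdisk "$disk" --new=$part:$part_start:$part_end
done
# partition 1: sectors 2048..2099199, partition 2: sectors 2099200..4196351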
00:05:23.318 02:46:13 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:23.318 02:46:13 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:23.318 02:46:13 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:23.318 02:46:13 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:23.318 02:46:13 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:24.252 The operation has completed successfully. 00:05:24.253 02:46:14 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:24.253 02:46:14 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:24.253 02:46:14 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 215449 00:05:24.253 02:46:14 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:24.253 02:46:14 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:24.253 02:46:14 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:24.253 02:46:14 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:24.253 02:46:14 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:24.253 02:46:14 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:24.253 02:46:14 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:24.253 02:46:14 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:24.253 02:46:14 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:24.253 02:46:14 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:24.253 02:46:14 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:24.253 02:46:14 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:24.253 02:46:14 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:24.253 02:46:14 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:24.253 02:46:14 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:24.253 02:46:14 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:24.253 02:46:14 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:24.253 02:46:14 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:24.253 02:46:15 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:24.253 02:46:15 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:24.253 02:46:15 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:24.253 02:46:15 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:24.253 02:46:15 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:24.253 02:46:15 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:24.253 02:46:15 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:24.253 02:46:15 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:24.253 02:46:15 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:24.253 02:46:15 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:24.253 02:46:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.253 02:46:15 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:24.253 02:46:15 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:24.253 02:46:15 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:24.253 02:46:15 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:25.631 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:25.631 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:25.631 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:25.631 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.631 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:25.631 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.631 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:25.631 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.631 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:25.631 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.631 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:25.631 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.631 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:25.631 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.631 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:25.631 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.631 02:46:16 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:25.631 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.631 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:25.631 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.631 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:25.631 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.631 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:25.631 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.631 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:25.631 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.631 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:25.631 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.631 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:25.631 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.631 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:25.631 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.631 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:25.631 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.631 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:25.631 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.631 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:25.631 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:25.632 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:25.632 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:25.632 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:25.632 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:25.632 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:25.632 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:25.632 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:25.632 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:25.632 02:46:16 
setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:25.632 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:25.632 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:25.632 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:25.632 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.632 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:25.632 02:46:16 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:25.632 02:46:16 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:25.632 02:46:16 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:26.619 02:46:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.619 02:46:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:26.619 02:46:17 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:26.619 02:46:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.619 02:46:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.619 02:46:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.619 02:46:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.619 02:46:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.619 02:46:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.619 02:46:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.619 02:46:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.619 02:46:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.619 02:46:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.619 02:46:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.619 02:46:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.619 02:46:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.619 02:46:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.619 02:46:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.619 02:46:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.619 02:46:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.619 02:46:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.619 02:46:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.619 02:46:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # 
[[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.619 02:46:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.619 02:46:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.619 02:46:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.619 02:46:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.619 02:46:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.619 02:46:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.619 02:46:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.619 02:46:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.619 02:46:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.619 02:46:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.619 02:46:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.619 02:46:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.619 02:46:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.619 02:46:17 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:26.619 02:46:17 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:26.619 02:46:17 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:26.619 02:46:17 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:26.619 02:46:17 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:26.619 02:46:17 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:26.619 02:46:17 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:26.619 02:46:17 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:26.619 02:46:17 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:26.619 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:26.619 02:46:17 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:26.619 02:46:17 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:26.619 00:05:26.619 real 0m5.510s 00:05:26.619 user 0m0.899s 00:05:26.619 sys 0m1.491s 00:05:26.878 02:46:17 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:26.878 02:46:17 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:26.878 ************************************ 00:05:26.878 END TEST dm_mount 00:05:26.878 ************************************ 00:05:26.878 02:46:17 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:26.878 02:46:17 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:26.878 02:46:17 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:26.878 02:46:17 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:26.878 02:46:17 
setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:26.878 02:46:17 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:26.878 02:46:17 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:27.140 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:27.140 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:27.140 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:27.140 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:27.140 02:46:17 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:27.140 02:46:17 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:27.140 02:46:17 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:27.140 02:46:17 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:27.140 02:46:17 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:27.140 02:46:17 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:27.140 02:46:17 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:27.140 00:05:27.140 real 0m13.407s 00:05:27.140 user 0m2.859s 00:05:27.140 sys 0m4.762s 00:05:27.140 02:46:17 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:27.140 02:46:17 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:27.140 ************************************ 00:05:27.140 END TEST devices 00:05:27.140 ************************************ 00:05:27.140 00:05:27.140 real 0m41.484s 00:05:27.140 user 0m11.752s 00:05:27.140 sys 0m18.030s 00:05:27.140 02:46:17 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:27.140 02:46:17 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:27.140 ************************************ 00:05:27.140 END TEST setup.sh 00:05:27.140 ************************************ 00:05:27.140 02:46:17 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:28.075 Hugepages 00:05:28.075 node hugesize free / total 00:05:28.075 node0 1048576kB 0 / 0 00:05:28.075 node0 2048kB 2048 / 2048 00:05:28.075 node1 1048576kB 0 / 0 00:05:28.075 node1 2048kB 0 / 0 00:05:28.075 00:05:28.075 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:28.075 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:28.333 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:28.333 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:28.333 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:28.333 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:28.333 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:28.333 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:28.333 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:28.333 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:28.333 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:28.333 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:28.333 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:28.333 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:28.333 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:28.333 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:28.333 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:28.333 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:28.333 02:46:18 -- spdk/autotest.sh@130 -- # uname -s 00:05:28.333 02:46:18 -- 
spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:28.334 02:46:18 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:28.334 02:46:18 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:29.268 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:29.268 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:29.268 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:29.268 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:29.268 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:29.268 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:29.268 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:29.268 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:29.268 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:29.268 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:29.268 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:29.268 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:29.268 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:29.268 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:29.268 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:29.268 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:30.203 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:30.462 02:46:21 -- common/autotest_common.sh@1528 -- # sleep 1 00:05:31.397 02:46:22 -- common/autotest_common.sh@1529 -- # bdfs=() 00:05:31.397 02:46:22 -- common/autotest_common.sh@1529 -- # local bdfs 00:05:31.397 02:46:22 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:05:31.397 02:46:22 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:05:31.397 02:46:22 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:31.397 02:46:22 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:31.397 02:46:22 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:31.397 02:46:22 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:31.397 02:46:22 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:31.655 02:46:22 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:31.655 02:46:22 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:05:31.655 02:46:22 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:32.590 Waiting for block devices as requested 00:05:32.590 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:32.848 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:32.848 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:32.848 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:32.848 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:33.105 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:33.105 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:33.105 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:33.105 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:33.362 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:33.362 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:33.362 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:33.362 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:33.620 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:33.620 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:33.620 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:33.620 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:33.877 02:46:24 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 
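Both the pre-cleanup block above and the later opal check find their controllers the same way: scripts/gen_nvme.sh prints a JSON bdev config and jq extracts the PCI addresses. A trimmed-down version of that lookup, assuming the workspace path used by this job, is:

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
bdfs=( $("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr') )
(( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
printf '%s\n' "${bdfs[@]}"            # this run reports a single controller, 0000:88:00.0

The loop entered at the end of the previous log line then runs nvme id-ctrl against each controller and greps the oacs and unvmcap fields, which is what the following output shows.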
00:05:33.877 02:46:24 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:33.877 02:46:24 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:05:33.878 02:46:24 -- common/autotest_common.sh@1498 -- # grep 0000:88:00.0/nvme/nvme 00:05:33.878 02:46:24 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:33.878 02:46:24 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:33.878 02:46:24 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:33.878 02:46:24 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:05:33.878 02:46:24 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:05:33.878 02:46:24 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:05:33.878 02:46:24 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:05:33.878 02:46:24 -- common/autotest_common.sh@1541 -- # grep oacs 00:05:33.878 02:46:24 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:05:33.878 02:46:24 -- common/autotest_common.sh@1541 -- # oacs=' 0xf' 00:05:33.878 02:46:24 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:05:33.878 02:46:24 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:05:33.878 02:46:24 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:05:33.878 02:46:24 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:05:33.878 02:46:24 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:05:33.878 02:46:24 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:05:33.878 02:46:24 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:05:33.878 02:46:24 -- common/autotest_common.sh@1553 -- # continue 00:05:33.878 02:46:24 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:33.878 02:46:24 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:33.878 02:46:24 -- common/autotest_common.sh@10 -- # set +x 00:05:33.878 02:46:24 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:33.878 02:46:24 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:33.878 02:46:24 -- common/autotest_common.sh@10 -- # set +x 00:05:33.878 02:46:24 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:35.251 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:35.251 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:35.251 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:35.251 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:35.251 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:35.251 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:35.251 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:35.251 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:35.251 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:35.251 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:35.251 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:35.251 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:35.251 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:35.251 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:35.251 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:35.251 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:36.187 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:36.187 02:46:26 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:36.187 02:46:26 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:36.187 02:46:26 -- 
common/autotest_common.sh@10 -- # set +x 00:05:36.187 02:46:26 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:36.187 02:46:26 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:05:36.187 02:46:26 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:05:36.187 02:46:26 -- common/autotest_common.sh@1573 -- # bdfs=() 00:05:36.187 02:46:26 -- common/autotest_common.sh@1573 -- # local bdfs 00:05:36.187 02:46:26 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:05:36.187 02:46:26 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:36.187 02:46:26 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:36.187 02:46:26 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:36.187 02:46:26 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:36.187 02:46:26 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:36.187 02:46:26 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:36.187 02:46:26 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:05:36.187 02:46:26 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:36.187 02:46:26 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:36.187 02:46:26 -- common/autotest_common.sh@1576 -- # device=0x0a54 00:05:36.187 02:46:26 -- common/autotest_common.sh@1577 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:36.187 02:46:26 -- common/autotest_common.sh@1578 -- # bdfs+=($bdf) 00:05:36.187 02:46:26 -- common/autotest_common.sh@1582 -- # printf '%s\n' 0000:88:00.0 00:05:36.187 02:46:26 -- common/autotest_common.sh@1588 -- # [[ -z 0000:88:00.0 ]] 00:05:36.187 02:46:26 -- common/autotest_common.sh@1593 -- # spdk_tgt_pid=220614 00:05:36.187 02:46:26 -- common/autotest_common.sh@1592 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:36.187 02:46:26 -- common/autotest_common.sh@1594 -- # waitforlisten 220614 00:05:36.187 02:46:26 -- common/autotest_common.sh@827 -- # '[' -z 220614 ']' 00:05:36.187 02:46:26 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.187 02:46:26 -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:36.187 02:46:26 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.187 02:46:26 -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:36.187 02:46:26 -- common/autotest_common.sh@10 -- # set +x 00:05:36.187 [2024-05-13 02:46:26.972254] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:05:36.187 [2024-05-13 02:46:26.972351] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid220614 ] 00:05:36.446 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.446 [2024-05-13 02:46:27.004021] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
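opal_revert_cleanup needs a running SPDK target before it can send any RPC, which is what the spdk_tgt and waitforlisten lines above are about. A hedged approximation of that startup, using the default socket path and the RPC the log issues next, would be:

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$rootdir/build/bin/spdk_tgt" &                       # target process (pid 220614 in this run)
spdk_tgt_pid=$!
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # crude stand-in for waitforlisten
"$rootdir/scripts/rpc.py" bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0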
00:05:36.446 [2024-05-13 02:46:27.035833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.446 [2024-05-13 02:46:27.126228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.705 02:46:27 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:36.705 02:46:27 -- common/autotest_common.sh@860 -- # return 0 00:05:36.705 02:46:27 -- common/autotest_common.sh@1596 -- # bdf_id=0 00:05:36.705 02:46:27 -- common/autotest_common.sh@1597 -- # for bdf in "${bdfs[@]}" 00:05:36.705 02:46:27 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:05:39.986 nvme0n1 00:05:39.986 02:46:30 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:39.986 [2024-05-13 02:46:30.686676] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:39.986 [2024-05-13 02:46:30.686734] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:39.986 request: 00:05:39.986 { 00:05:39.986 "nvme_ctrlr_name": "nvme0", 00:05:39.986 "password": "test", 00:05:39.986 "method": "bdev_nvme_opal_revert", 00:05:39.986 "req_id": 1 00:05:39.986 } 00:05:39.986 Got JSON-RPC error response 00:05:39.986 response: 00:05:39.986 { 00:05:39.986 "code": -32603, 00:05:39.986 "message": "Internal error" 00:05:39.986 } 00:05:39.987 02:46:30 -- common/autotest_common.sh@1600 -- # true 00:05:39.987 02:46:30 -- common/autotest_common.sh@1601 -- # (( ++bdf_id )) 00:05:39.987 02:46:30 -- common/autotest_common.sh@1604 -- # killprocess 220614 00:05:39.987 02:46:30 -- common/autotest_common.sh@946 -- # '[' -z 220614 ']' 00:05:39.987 02:46:30 -- common/autotest_common.sh@950 -- # kill -0 220614 00:05:39.987 02:46:30 -- common/autotest_common.sh@951 -- # uname 00:05:39.987 02:46:30 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:39.987 02:46:30 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 220614 00:05:39.987 02:46:30 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:39.987 02:46:30 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:39.987 02:46:30 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 220614' 00:05:39.987 killing process with pid 220614 00:05:39.987 02:46:30 -- common/autotest_common.sh@965 -- # kill 220614 00:05:39.987 02:46:30 -- common/autotest_common.sh@970 -- # wait 220614 00:05:41.891 02:46:32 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:41.891 02:46:32 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:41.891 02:46:32 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:41.891 02:46:32 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:41.891 02:46:32 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:41.891 02:46:32 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:41.891 02:46:32 -- common/autotest_common.sh@10 -- # set +x 00:05:41.891 02:46:32 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:41.891 02:46:32 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:41.891 02:46:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:41.891 02:46:32 -- common/autotest_common.sh@10 -- # set +x 00:05:41.891 ************************************ 00:05:41.891 START TEST env 00:05:41.891 ************************************ 00:05:41.891 02:46:32 
env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:41.891 * Looking for test storage... 00:05:41.891 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:41.891 02:46:32 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:41.891 02:46:32 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:41.891 02:46:32 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:41.891 02:46:32 env -- common/autotest_common.sh@10 -- # set +x 00:05:41.891 ************************************ 00:05:41.891 START TEST env_memory 00:05:41.891 ************************************ 00:05:41.891 02:46:32 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:41.891 00:05:41.891 00:05:41.891 CUnit - A unit testing framework for C - Version 2.1-3 00:05:41.891 http://cunit.sourceforge.net/ 00:05:41.891 00:05:41.891 00:05:41.891 Suite: memory 00:05:41.891 Test: alloc and free memory map ...[2024-05-13 02:46:32.657792] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:41.891 passed 00:05:41.891 Test: mem map translation ...[2024-05-13 02:46:32.682369] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:41.891 [2024-05-13 02:46:32.682396] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:41.891 [2024-05-13 02:46:32.682448] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:41.891 [2024-05-13 02:46:32.682462] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:42.149 passed 00:05:42.149 Test: mem map registration ...[2024-05-13 02:46:32.734721] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:42.149 [2024-05-13 02:46:32.734751] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:42.149 passed 00:05:42.149 Test: mem map adjacent registrations ...passed 00:05:42.149 00:05:42.149 Run Summary: Type Total Ran Passed Failed Inactive 00:05:42.149 suites 1 1 n/a 0 0 00:05:42.149 tests 4 4 4 0 0 00:05:42.149 asserts 152 152 152 0 n/a 00:05:42.149 00:05:42.149 Elapsed time = 0.166 seconds 00:05:42.149 00:05:42.149 real 0m0.173s 00:05:42.149 user 0m0.167s 00:05:42.149 sys 0m0.005s 00:05:42.149 02:46:32 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:42.149 02:46:32 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:42.149 ************************************ 00:05:42.149 END TEST env_memory 00:05:42.149 ************************************ 00:05:42.149 02:46:32 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:42.149 
02:46:32 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:42.149 02:46:32 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:42.149 02:46:32 env -- common/autotest_common.sh@10 -- # set +x 00:05:42.149 ************************************ 00:05:42.149 START TEST env_vtophys 00:05:42.149 ************************************ 00:05:42.149 02:46:32 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:42.149 EAL: lib.eal log level changed from notice to debug 00:05:42.149 EAL: Detected lcore 0 as core 0 on socket 0 00:05:42.149 EAL: Detected lcore 1 as core 1 on socket 0 00:05:42.149 EAL: Detected lcore 2 as core 2 on socket 0 00:05:42.149 EAL: Detected lcore 3 as core 3 on socket 0 00:05:42.149 EAL: Detected lcore 4 as core 4 on socket 0 00:05:42.149 EAL: Detected lcore 5 as core 5 on socket 0 00:05:42.149 EAL: Detected lcore 6 as core 8 on socket 0 00:05:42.149 EAL: Detected lcore 7 as core 9 on socket 0 00:05:42.149 EAL: Detected lcore 8 as core 10 on socket 0 00:05:42.149 EAL: Detected lcore 9 as core 11 on socket 0 00:05:42.149 EAL: Detected lcore 10 as core 12 on socket 0 00:05:42.149 EAL: Detected lcore 11 as core 13 on socket 0 00:05:42.149 EAL: Detected lcore 12 as core 0 on socket 1 00:05:42.149 EAL: Detected lcore 13 as core 1 on socket 1 00:05:42.149 EAL: Detected lcore 14 as core 2 on socket 1 00:05:42.149 EAL: Detected lcore 15 as core 3 on socket 1 00:05:42.149 EAL: Detected lcore 16 as core 4 on socket 1 00:05:42.149 EAL: Detected lcore 17 as core 5 on socket 1 00:05:42.149 EAL: Detected lcore 18 as core 8 on socket 1 00:05:42.149 EAL: Detected lcore 19 as core 9 on socket 1 00:05:42.149 EAL: Detected lcore 20 as core 10 on socket 1 00:05:42.149 EAL: Detected lcore 21 as core 11 on socket 1 00:05:42.149 EAL: Detected lcore 22 as core 12 on socket 1 00:05:42.149 EAL: Detected lcore 23 as core 13 on socket 1 00:05:42.149 EAL: Detected lcore 24 as core 0 on socket 0 00:05:42.149 EAL: Detected lcore 25 as core 1 on socket 0 00:05:42.149 EAL: Detected lcore 26 as core 2 on socket 0 00:05:42.149 EAL: Detected lcore 27 as core 3 on socket 0 00:05:42.149 EAL: Detected lcore 28 as core 4 on socket 0 00:05:42.149 EAL: Detected lcore 29 as core 5 on socket 0 00:05:42.149 EAL: Detected lcore 30 as core 8 on socket 0 00:05:42.149 EAL: Detected lcore 31 as core 9 on socket 0 00:05:42.149 EAL: Detected lcore 32 as core 10 on socket 0 00:05:42.149 EAL: Detected lcore 33 as core 11 on socket 0 00:05:42.149 EAL: Detected lcore 34 as core 12 on socket 0 00:05:42.149 EAL: Detected lcore 35 as core 13 on socket 0 00:05:42.149 EAL: Detected lcore 36 as core 0 on socket 1 00:05:42.149 EAL: Detected lcore 37 as core 1 on socket 1 00:05:42.149 EAL: Detected lcore 38 as core 2 on socket 1 00:05:42.149 EAL: Detected lcore 39 as core 3 on socket 1 00:05:42.150 EAL: Detected lcore 40 as core 4 on socket 1 00:05:42.150 EAL: Detected lcore 41 as core 5 on socket 1 00:05:42.150 EAL: Detected lcore 42 as core 8 on socket 1 00:05:42.150 EAL: Detected lcore 43 as core 9 on socket 1 00:05:42.150 EAL: Detected lcore 44 as core 10 on socket 1 00:05:42.150 EAL: Detected lcore 45 as core 11 on socket 1 00:05:42.150 EAL: Detected lcore 46 as core 12 on socket 1 00:05:42.150 EAL: Detected lcore 47 as core 13 on socket 1 00:05:42.150 EAL: Maximum logical cores by configuration: 128 00:05:42.150 EAL: Detected CPU lcores: 48 00:05:42.150 EAL: Detected NUMA nodes: 2 00:05:42.150 EAL: Checking presence of .so 
'librte_eal.so.24.2' 00:05:42.150 EAL: Detected shared linkage of DPDK 00:05:42.150 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24.2 00:05:42.150 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24.2 00:05:42.150 EAL: Registered [vdev] bus. 00:05:42.150 EAL: bus.vdev log level changed from disabled to notice 00:05:42.150 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24.2 00:05:42.150 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24.2 00:05:42.150 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:42.150 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:42.150 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:05:42.150 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:05:42.150 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:05:42.150 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:05:42.150 EAL: No shared files mode enabled, IPC will be disabled 00:05:42.150 EAL: No shared files mode enabled, IPC is disabled 00:05:42.150 EAL: Bus pci wants IOVA as 'DC' 00:05:42.150 EAL: Bus vdev wants IOVA as 'DC' 00:05:42.150 EAL: Buses did not request a specific IOVA mode. 00:05:42.150 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:42.150 EAL: Selected IOVA mode 'VA' 00:05:42.150 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.150 EAL: Probing VFIO support... 00:05:42.150 EAL: IOMMU type 1 (Type 1) is supported 00:05:42.150 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:42.150 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:42.150 EAL: VFIO support initialized 00:05:42.150 EAL: Ask a virtual area of 0x2e000 bytes 00:05:42.150 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:42.150 EAL: Setting up physically contiguous memory... 
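The EAL selected IOVA-as-VA above because an IOMMU is present and VFIO initialized with IOMMU type 1. A quick way to confirm those preconditions on the host, independent of the test (generic sysfs and lsmod checks, not commands taken from this run):

  ls /sys/kernel/iommu_groups | wc -l   # non-zero once the IOMMU is enabled in the kernel
  lsmod | grep '^vfio_pci'              # vfio-pci must be loaded for the driver rebinds shown by setup.sh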
00:05:42.150 EAL: Setting maximum number of open files to 524288 00:05:42.150 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:42.150 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:42.150 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:42.150 EAL: Ask a virtual area of 0x61000 bytes 00:05:42.150 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:42.150 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:42.150 EAL: Ask a virtual area of 0x400000000 bytes 00:05:42.150 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:42.150 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:42.150 EAL: Ask a virtual area of 0x61000 bytes 00:05:42.150 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:42.150 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:42.150 EAL: Ask a virtual area of 0x400000000 bytes 00:05:42.150 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:42.150 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:42.150 EAL: Ask a virtual area of 0x61000 bytes 00:05:42.150 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:42.150 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:42.150 EAL: Ask a virtual area of 0x400000000 bytes 00:05:42.150 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:42.150 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:42.150 EAL: Ask a virtual area of 0x61000 bytes 00:05:42.150 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:42.150 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:42.150 EAL: Ask a virtual area of 0x400000000 bytes 00:05:42.150 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:42.150 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:42.150 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:42.150 EAL: Ask a virtual area of 0x61000 bytes 00:05:42.150 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:42.150 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:42.150 EAL: Ask a virtual area of 0x400000000 bytes 00:05:42.150 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:42.150 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:42.150 EAL: Ask a virtual area of 0x61000 bytes 00:05:42.150 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:42.150 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:42.150 EAL: Ask a virtual area of 0x400000000 bytes 00:05:42.150 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:42.150 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:42.150 EAL: Ask a virtual area of 0x61000 bytes 00:05:42.150 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:42.150 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:42.150 EAL: Ask a virtual area of 0x400000000 bytes 00:05:42.150 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:42.150 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:42.150 EAL: Ask a virtual area of 0x61000 bytes 00:05:42.150 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:42.150 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:42.150 EAL: Ask a virtual area of 0x400000000 bytes 00:05:42.150 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:42.150 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:42.150 EAL: Hugepages will be freed exactly as allocated. 00:05:42.150 EAL: No shared files mode enabled, IPC is disabled 00:05:42.150 EAL: No shared files mode enabled, IPC is disabled 00:05:42.150 EAL: TSC frequency is ~2700000 KHz 00:05:42.150 EAL: Main lcore 0 is ready (tid=7f00e02e8a00;cpuset=[0]) 00:05:42.150 EAL: Trying to obtain current memory policy. 00:05:42.150 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.150 EAL: Restoring previous memory policy: 0 00:05:42.150 EAL: request: mp_malloc_sync 00:05:42.150 EAL: No shared files mode enabled, IPC is disabled 00:05:42.150 EAL: Heap on socket 0 was expanded by 2MB 00:05:42.150 EAL: No shared files mode enabled, IPC is disabled 00:05:42.150 EAL: No shared files mode enabled, IPC is disabled 00:05:42.150 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:42.150 EAL: Mem event callback 'spdk:(nil)' registered 00:05:42.150 00:05:42.150 00:05:42.150 CUnit - A unit testing framework for C - Version 2.1-3 00:05:42.150 http://cunit.sourceforge.net/ 00:05:42.150 00:05:42.150 00:05:42.150 Suite: components_suite 00:05:42.150 Test: vtophys_malloc_test ...passed 00:05:42.150 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:42.150 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.150 EAL: Restoring previous memory policy: 4 00:05:42.150 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.150 EAL: request: mp_malloc_sync 00:05:42.150 EAL: No shared files mode enabled, IPC is disabled 00:05:42.150 EAL: Heap on socket 0 was expanded by 4MB 00:05:42.150 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.150 EAL: request: mp_malloc_sync 00:05:42.150 EAL: No shared files mode enabled, IPC is disabled 00:05:42.150 EAL: Heap on socket 0 was shrunk by 4MB 00:05:42.150 EAL: Trying to obtain current memory policy. 00:05:42.150 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.150 EAL: Restoring previous memory policy: 4 00:05:42.150 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.150 EAL: request: mp_malloc_sync 00:05:42.150 EAL: No shared files mode enabled, IPC is disabled 00:05:42.150 EAL: Heap on socket 0 was expanded by 6MB 00:05:42.150 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.150 EAL: request: mp_malloc_sync 00:05:42.150 EAL: No shared files mode enabled, IPC is disabled 00:05:42.150 EAL: Heap on socket 0 was shrunk by 6MB 00:05:42.150 EAL: Trying to obtain current memory policy. 00:05:42.150 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.150 EAL: Restoring previous memory policy: 4 00:05:42.150 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.150 EAL: request: mp_malloc_sync 00:05:42.150 EAL: No shared files mode enabled, IPC is disabled 00:05:42.150 EAL: Heap on socket 0 was expanded by 10MB 00:05:42.150 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.150 EAL: request: mp_malloc_sync 00:05:42.150 EAL: No shared files mode enabled, IPC is disabled 00:05:42.150 EAL: Heap on socket 0 was shrunk by 10MB 00:05:42.150 EAL: Trying to obtain current memory policy. 
00:05:42.150 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.150 EAL: Restoring previous memory policy: 4 00:05:42.150 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.150 EAL: request: mp_malloc_sync 00:05:42.150 EAL: No shared files mode enabled, IPC is disabled 00:05:42.150 EAL: Heap on socket 0 was expanded by 18MB 00:05:42.150 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.150 EAL: request: mp_malloc_sync 00:05:42.150 EAL: No shared files mode enabled, IPC is disabled 00:05:42.150 EAL: Heap on socket 0 was shrunk by 18MB 00:05:42.150 EAL: Trying to obtain current memory policy. 00:05:42.150 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.150 EAL: Restoring previous memory policy: 4 00:05:42.150 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.150 EAL: request: mp_malloc_sync 00:05:42.150 EAL: No shared files mode enabled, IPC is disabled 00:05:42.150 EAL: Heap on socket 0 was expanded by 34MB 00:05:42.150 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.408 EAL: request: mp_malloc_sync 00:05:42.408 EAL: No shared files mode enabled, IPC is disabled 00:05:42.408 EAL: Heap on socket 0 was shrunk by 34MB 00:05:42.408 EAL: Trying to obtain current memory policy. 00:05:42.408 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.408 EAL: Restoring previous memory policy: 4 00:05:42.408 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.408 EAL: request: mp_malloc_sync 00:05:42.408 EAL: No shared files mode enabled, IPC is disabled 00:05:42.408 EAL: Heap on socket 0 was expanded by 66MB 00:05:42.408 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.408 EAL: request: mp_malloc_sync 00:05:42.408 EAL: No shared files mode enabled, IPC is disabled 00:05:42.408 EAL: Heap on socket 0 was shrunk by 66MB 00:05:42.408 EAL: Trying to obtain current memory policy. 00:05:42.408 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.408 EAL: Restoring previous memory policy: 4 00:05:42.408 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.408 EAL: request: mp_malloc_sync 00:05:42.408 EAL: No shared files mode enabled, IPC is disabled 00:05:42.408 EAL: Heap on socket 0 was expanded by 130MB 00:05:42.408 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.408 EAL: request: mp_malloc_sync 00:05:42.408 EAL: No shared files mode enabled, IPC is disabled 00:05:42.408 EAL: Heap on socket 0 was shrunk by 130MB 00:05:42.408 EAL: Trying to obtain current memory policy. 00:05:42.408 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.408 EAL: Restoring previous memory policy: 4 00:05:42.408 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.408 EAL: request: mp_malloc_sync 00:05:42.408 EAL: No shared files mode enabled, IPC is disabled 00:05:42.408 EAL: Heap on socket 0 was expanded by 258MB 00:05:42.665 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.665 EAL: request: mp_malloc_sync 00:05:42.665 EAL: No shared files mode enabled, IPC is disabled 00:05:42.665 EAL: Heap on socket 0 was shrunk by 258MB 00:05:42.665 EAL: Trying to obtain current memory policy. 
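Each expanded-by/shrunk-by pair in vtophys_spdk_malloc_test is the mem event callback growing and then releasing hugepage-backed heap ('Hugepages will be freed exactly as allocated' above). If desired, that can be observed from outside the test with standard procfs/sysfs reads (generic commands, not part of the suite):

  grep -E 'HugePages_(Total|Free)' /proc/meminfo
  cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages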
00:05:42.665 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.665 EAL: Restoring previous memory policy: 4 00:05:42.665 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.665 EAL: request: mp_malloc_sync 00:05:42.665 EAL: No shared files mode enabled, IPC is disabled 00:05:42.665 EAL: Heap on socket 0 was expanded by 514MB 00:05:42.923 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.923 EAL: request: mp_malloc_sync 00:05:42.923 EAL: No shared files mode enabled, IPC is disabled 00:05:42.923 EAL: Heap on socket 0 was shrunk by 514MB 00:05:42.923 EAL: Trying to obtain current memory policy. 00:05:42.923 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.180 EAL: Restoring previous memory policy: 4 00:05:43.180 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.180 EAL: request: mp_malloc_sync 00:05:43.180 EAL: No shared files mode enabled, IPC is disabled 00:05:43.180 EAL: Heap on socket 0 was expanded by 1026MB 00:05:43.437 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.695 EAL: request: mp_malloc_sync 00:05:43.695 EAL: No shared files mode enabled, IPC is disabled 00:05:43.695 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:43.695 passed 00:05:43.695 00:05:43.695 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.695 suites 1 1 n/a 0 0 00:05:43.695 tests 2 2 2 0 0 00:05:43.695 asserts 497 497 497 0 n/a 00:05:43.695 00:05:43.695 Elapsed time = 1.372 seconds 00:05:43.695 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.695 EAL: request: mp_malloc_sync 00:05:43.695 EAL: No shared files mode enabled, IPC is disabled 00:05:43.695 EAL: Heap on socket 0 was shrunk by 2MB 00:05:43.695 EAL: No shared files mode enabled, IPC is disabled 00:05:43.695 EAL: No shared files mode enabled, IPC is disabled 00:05:43.695 EAL: No shared files mode enabled, IPC is disabled 00:05:43.695 00:05:43.695 real 0m1.490s 00:05:43.695 user 0m0.855s 00:05:43.695 sys 0m0.603s 00:05:43.695 02:46:34 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:43.695 02:46:34 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:43.695 ************************************ 00:05:43.695 END TEST env_vtophys 00:05:43.695 ************************************ 00:05:43.695 02:46:34 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:43.695 02:46:34 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:43.695 02:46:34 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:43.695 02:46:34 env -- common/autotest_common.sh@10 -- # set +x 00:05:43.695 ************************************ 00:05:43.695 START TEST env_pci 00:05:43.695 ************************************ 00:05:43.695 02:46:34 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:43.695 00:05:43.695 00:05:43.695 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.695 http://cunit.sourceforge.net/ 00:05:43.695 00:05:43.695 00:05:43.695 Suite: pci 00:05:43.695 Test: pci_hook ...[2024-05-13 02:46:34.403996] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 221509 has claimed it 00:05:43.695 EAL: Cannot find device (10000:00:01.0) 00:05:43.695 EAL: Failed to attach device on primary process 00:05:43.695 passed 00:05:43.695 00:05:43.695 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:43.695 suites 1 1 n/a 0 0 00:05:43.695 tests 1 1 1 0 0 00:05:43.695 asserts 25 25 25 0 n/a 00:05:43.695 00:05:43.695 Elapsed time = 0.021 seconds 00:05:43.695 00:05:43.695 real 0m0.034s 00:05:43.695 user 0m0.012s 00:05:43.695 sys 0m0.022s 00:05:43.695 02:46:34 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:43.695 02:46:34 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:43.695 ************************************ 00:05:43.695 END TEST env_pci 00:05:43.695 ************************************ 00:05:43.695 02:46:34 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:43.695 02:46:34 env -- env/env.sh@15 -- # uname 00:05:43.695 02:46:34 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:43.695 02:46:34 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:43.695 02:46:34 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:43.695 02:46:34 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:05:43.695 02:46:34 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:43.695 02:46:34 env -- common/autotest_common.sh@10 -- # set +x 00:05:43.695 ************************************ 00:05:43.695 START TEST env_dpdk_post_init 00:05:43.695 ************************************ 00:05:43.695 02:46:34 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:43.953 EAL: Detected CPU lcores: 48 00:05:43.953 EAL: Detected NUMA nodes: 2 00:05:43.953 EAL: Detected shared linkage of DPDK 00:05:43.953 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:43.953 EAL: Selected IOVA mode 'VA' 00:05:43.953 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.953 EAL: VFIO support initialized 00:05:43.953 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:43.953 EAL: Using IOMMU type 1 (Type 1) 00:05:43.953 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:43.953 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:43.953 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:43.953 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:43.953 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:43.953 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:43.953 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:43.953 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:43.953 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:43.953 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:43.953 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:43.953 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:43.953 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:43.953 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:44.210 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:44.210 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:44.776 EAL: Probe PCI 
driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:05:48.058 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:05:48.058 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:05:48.317 Starting DPDK initialization... 00:05:48.317 Starting SPDK post initialization... 00:05:48.317 SPDK NVMe probe 00:05:48.317 Attaching to 0000:88:00.0 00:05:48.317 Attached to 0000:88:00.0 00:05:48.317 Cleaning up... 00:05:48.317 00:05:48.317 real 0m4.408s 00:05:48.317 user 0m3.272s 00:05:48.317 sys 0m0.193s 00:05:48.317 02:46:38 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:48.317 02:46:38 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:48.317 ************************************ 00:05:48.317 END TEST env_dpdk_post_init 00:05:48.317 ************************************ 00:05:48.317 02:46:38 env -- env/env.sh@26 -- # uname 00:05:48.317 02:46:38 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:48.317 02:46:38 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:48.317 02:46:38 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:48.317 02:46:38 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:48.317 02:46:38 env -- common/autotest_common.sh@10 -- # set +x 00:05:48.317 ************************************ 00:05:48.317 START TEST env_mem_callbacks 00:05:48.317 ************************************ 00:05:48.317 02:46:38 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:48.317 EAL: Detected CPU lcores: 48 00:05:48.317 EAL: Detected NUMA nodes: 2 00:05:48.317 EAL: Detected shared linkage of DPDK 00:05:48.317 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:48.317 EAL: Selected IOVA mode 'VA' 00:05:48.317 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.317 EAL: VFIO support initialized 00:05:48.317 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:48.317 00:05:48.317 00:05:48.317 CUnit - A unit testing framework for C - Version 2.1-3 00:05:48.317 http://cunit.sourceforge.net/ 00:05:48.317 00:05:48.317 00:05:48.317 Suite: memory 00:05:48.317 Test: test ... 
00:05:48.317 register 0x200000200000 2097152 00:05:48.317 malloc 3145728 00:05:48.317 register 0x200000400000 4194304 00:05:48.317 buf 0x200000500000 len 3145728 PASSED 00:05:48.317 malloc 64 00:05:48.317 buf 0x2000004fff40 len 64 PASSED 00:05:48.317 malloc 4194304 00:05:48.317 register 0x200000800000 6291456 00:05:48.317 buf 0x200000a00000 len 4194304 PASSED 00:05:48.317 free 0x200000500000 3145728 00:05:48.317 free 0x2000004fff40 64 00:05:48.317 unregister 0x200000400000 4194304 PASSED 00:05:48.317 free 0x200000a00000 4194304 00:05:48.317 unregister 0x200000800000 6291456 PASSED 00:05:48.317 malloc 8388608 00:05:48.317 register 0x200000400000 10485760 00:05:48.317 buf 0x200000600000 len 8388608 PASSED 00:05:48.317 free 0x200000600000 8388608 00:05:48.317 unregister 0x200000400000 10485760 PASSED 00:05:48.317 passed 00:05:48.317 00:05:48.317 Run Summary: Type Total Ran Passed Failed Inactive 00:05:48.317 suites 1 1 n/a 0 0 00:05:48.317 tests 1 1 1 0 0 00:05:48.317 asserts 15 15 15 0 n/a 00:05:48.317 00:05:48.317 Elapsed time = 0.005 seconds 00:05:48.317 00:05:48.317 real 0m0.050s 00:05:48.317 user 0m0.012s 00:05:48.317 sys 0m0.038s 00:05:48.317 02:46:38 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:48.317 02:46:38 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:48.317 ************************************ 00:05:48.317 END TEST env_mem_callbacks 00:05:48.317 ************************************ 00:05:48.317 00:05:48.317 real 0m6.466s 00:05:48.317 user 0m4.429s 00:05:48.317 sys 0m1.068s 00:05:48.317 02:46:39 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:48.317 02:46:39 env -- common/autotest_common.sh@10 -- # set +x 00:05:48.317 ************************************ 00:05:48.317 END TEST env 00:05:48.317 ************************************ 00:05:48.317 02:46:39 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:48.317 02:46:39 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:48.317 02:46:39 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:48.317 02:46:39 -- common/autotest_common.sh@10 -- # set +x 00:05:48.317 ************************************ 00:05:48.317 START TEST rpc 00:05:48.317 ************************************ 00:05:48.317 02:46:39 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:48.317 * Looking for test storage... 00:05:48.317 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:48.317 02:46:39 rpc -- rpc/rpc.sh@65 -- # spdk_pid=222158 00:05:48.317 02:46:39 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:48.317 02:46:39 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:48.317 02:46:39 rpc -- rpc/rpc.sh@67 -- # waitforlisten 222158 00:05:48.317 02:46:39 rpc -- common/autotest_common.sh@827 -- # '[' -z 222158 ']' 00:05:48.317 02:46:39 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.317 02:46:39 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:48.317 02:46:39 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
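rpc.sh then follows the usual SPDK test pattern: launch spdk_tgt in the background (here with -e bdev, which enables the bdev tracepoint group reported a few lines below), record its pid for cleanup, and block until the JSON-RPC socket answers. A minimal sketch of that pattern; the poll loop is illustrative, the real waitforlisten helper in autotest_common.sh retries up to max_retries times:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev &
  spdk_pid=$!
  trap 'kill $spdk_pid' SIGINT SIGTERM EXIT
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2   # illustrative poll interval
  done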
00:05:48.317 02:46:39 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:48.317 02:46:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.576 [2024-05-13 02:46:39.159043] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:05:48.576 [2024-05-13 02:46:39.159147] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid222158 ] 00:05:48.576 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.576 [2024-05-13 02:46:39.193425] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:48.576 [2024-05-13 02:46:39.221729] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.576 [2024-05-13 02:46:39.307519] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:48.576 [2024-05-13 02:46:39.307572] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 222158' to capture a snapshot of events at runtime. 00:05:48.576 [2024-05-13 02:46:39.307599] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:48.576 [2024-05-13 02:46:39.307611] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:48.576 [2024-05-13 02:46:39.307621] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid222158 for offline analysis/debug. 00:05:48.576 [2024-05-13 02:46:39.307647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.834 02:46:39 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:48.834 02:46:39 rpc -- common/autotest_common.sh@860 -- # return 0 00:05:48.834 02:46:39 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:48.834 02:46:39 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:48.834 02:46:39 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:48.834 02:46:39 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:48.834 02:46:39 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:48.834 02:46:39 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:48.834 02:46:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.834 ************************************ 00:05:48.834 START TEST rpc_integrity 00:05:48.834 ************************************ 00:05:48.834 02:46:39 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:48.834 02:46:39 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:48.834 02:46:39 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.834 02:46:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.834 02:46:39 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
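rpc_integrity then exercises the target purely over JSON-RPC: list bdevs (expecting none), create a malloc bdev, layer a passthru bdev on top, verify the count, and tear both down. The same calls can be issued by hand against the socket; a sketch mirroring the rpc_cmd invocations traced below:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc bdev_get_bdevs | jq length                    # 0 on a fresh target
  $rpc bdev_malloc_create 8 512                      # 8 MB malloc bdev with 512-byte blocks; prints Malloc0
  $rpc bdev_passthru_create -b Malloc0 -p Passthru0
  $rpc bdev_get_bdevs | jq length                    # now 2 (Malloc0 + Passthru0)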
00:05:48.834 02:46:39 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:48.834 02:46:39 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:48.834 02:46:39 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:48.834 02:46:39 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:48.834 02:46:39 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.834 02:46:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.093 02:46:39 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.093 02:46:39 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:49.093 02:46:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:49.093 02:46:39 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.093 02:46:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.093 02:46:39 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.093 02:46:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:49.093 { 00:05:49.093 "name": "Malloc0", 00:05:49.093 "aliases": [ 00:05:49.093 "2ea4b6ce-503b-4bdf-824c-55e0b4d9d8f7" 00:05:49.093 ], 00:05:49.093 "product_name": "Malloc disk", 00:05:49.093 "block_size": 512, 00:05:49.093 "num_blocks": 16384, 00:05:49.093 "uuid": "2ea4b6ce-503b-4bdf-824c-55e0b4d9d8f7", 00:05:49.093 "assigned_rate_limits": { 00:05:49.093 "rw_ios_per_sec": 0, 00:05:49.093 "rw_mbytes_per_sec": 0, 00:05:49.093 "r_mbytes_per_sec": 0, 00:05:49.093 "w_mbytes_per_sec": 0 00:05:49.093 }, 00:05:49.093 "claimed": false, 00:05:49.093 "zoned": false, 00:05:49.093 "supported_io_types": { 00:05:49.093 "read": true, 00:05:49.093 "write": true, 00:05:49.093 "unmap": true, 00:05:49.093 "write_zeroes": true, 00:05:49.093 "flush": true, 00:05:49.093 "reset": true, 00:05:49.093 "compare": false, 00:05:49.093 "compare_and_write": false, 00:05:49.093 "abort": true, 00:05:49.093 "nvme_admin": false, 00:05:49.093 "nvme_io": false 00:05:49.093 }, 00:05:49.093 "memory_domains": [ 00:05:49.093 { 00:05:49.093 "dma_device_id": "system", 00:05:49.093 "dma_device_type": 1 00:05:49.093 }, 00:05:49.093 { 00:05:49.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.093 "dma_device_type": 2 00:05:49.093 } 00:05:49.093 ], 00:05:49.093 "driver_specific": {} 00:05:49.093 } 00:05:49.093 ]' 00:05:49.093 02:46:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:49.093 02:46:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:49.093 02:46:39 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:49.093 02:46:39 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.093 02:46:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.093 [2024-05-13 02:46:39.695158] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:49.093 [2024-05-13 02:46:39.695203] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:49.093 [2024-05-13 02:46:39.695228] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x184dda0 00:05:49.093 [2024-05-13 02:46:39.695243] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:49.093 [2024-05-13 02:46:39.696782] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:49.093 [2024-05-13 02:46:39.696807] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 
00:05:49.093 Passthru0 00:05:49.093 02:46:39 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.093 02:46:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:49.093 02:46:39 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.093 02:46:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.093 02:46:39 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.093 02:46:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:49.093 { 00:05:49.093 "name": "Malloc0", 00:05:49.093 "aliases": [ 00:05:49.093 "2ea4b6ce-503b-4bdf-824c-55e0b4d9d8f7" 00:05:49.093 ], 00:05:49.093 "product_name": "Malloc disk", 00:05:49.093 "block_size": 512, 00:05:49.093 "num_blocks": 16384, 00:05:49.093 "uuid": "2ea4b6ce-503b-4bdf-824c-55e0b4d9d8f7", 00:05:49.093 "assigned_rate_limits": { 00:05:49.093 "rw_ios_per_sec": 0, 00:05:49.093 "rw_mbytes_per_sec": 0, 00:05:49.093 "r_mbytes_per_sec": 0, 00:05:49.093 "w_mbytes_per_sec": 0 00:05:49.093 }, 00:05:49.093 "claimed": true, 00:05:49.093 "claim_type": "exclusive_write", 00:05:49.093 "zoned": false, 00:05:49.093 "supported_io_types": { 00:05:49.093 "read": true, 00:05:49.093 "write": true, 00:05:49.093 "unmap": true, 00:05:49.093 "write_zeroes": true, 00:05:49.093 "flush": true, 00:05:49.093 "reset": true, 00:05:49.093 "compare": false, 00:05:49.093 "compare_and_write": false, 00:05:49.093 "abort": true, 00:05:49.093 "nvme_admin": false, 00:05:49.093 "nvme_io": false 00:05:49.093 }, 00:05:49.093 "memory_domains": [ 00:05:49.093 { 00:05:49.093 "dma_device_id": "system", 00:05:49.093 "dma_device_type": 1 00:05:49.093 }, 00:05:49.093 { 00:05:49.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.093 "dma_device_type": 2 00:05:49.093 } 00:05:49.093 ], 00:05:49.093 "driver_specific": {} 00:05:49.093 }, 00:05:49.093 { 00:05:49.093 "name": "Passthru0", 00:05:49.093 "aliases": [ 00:05:49.093 "0534499e-0186-5603-a8ce-949517731cd0" 00:05:49.093 ], 00:05:49.093 "product_name": "passthru", 00:05:49.093 "block_size": 512, 00:05:49.093 "num_blocks": 16384, 00:05:49.093 "uuid": "0534499e-0186-5603-a8ce-949517731cd0", 00:05:49.093 "assigned_rate_limits": { 00:05:49.093 "rw_ios_per_sec": 0, 00:05:49.093 "rw_mbytes_per_sec": 0, 00:05:49.093 "r_mbytes_per_sec": 0, 00:05:49.093 "w_mbytes_per_sec": 0 00:05:49.093 }, 00:05:49.093 "claimed": false, 00:05:49.093 "zoned": false, 00:05:49.093 "supported_io_types": { 00:05:49.093 "read": true, 00:05:49.093 "write": true, 00:05:49.093 "unmap": true, 00:05:49.093 "write_zeroes": true, 00:05:49.093 "flush": true, 00:05:49.093 "reset": true, 00:05:49.093 "compare": false, 00:05:49.093 "compare_and_write": false, 00:05:49.093 "abort": true, 00:05:49.093 "nvme_admin": false, 00:05:49.093 "nvme_io": false 00:05:49.093 }, 00:05:49.093 "memory_domains": [ 00:05:49.093 { 00:05:49.093 "dma_device_id": "system", 00:05:49.093 "dma_device_type": 1 00:05:49.094 }, 00:05:49.094 { 00:05:49.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.094 "dma_device_type": 2 00:05:49.094 } 00:05:49.094 ], 00:05:49.094 "driver_specific": { 00:05:49.094 "passthru": { 00:05:49.094 "name": "Passthru0", 00:05:49.094 "base_bdev_name": "Malloc0" 00:05:49.094 } 00:05:49.094 } 00:05:49.094 } 00:05:49.094 ]' 00:05:49.094 02:46:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:49.094 02:46:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:49.094 02:46:39 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 
00:05:49.094 02:46:39 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.094 02:46:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.094 02:46:39 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.094 02:46:39 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:49.094 02:46:39 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.094 02:46:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.094 02:46:39 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.094 02:46:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:49.094 02:46:39 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.094 02:46:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.094 02:46:39 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.094 02:46:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:49.094 02:46:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:49.094 02:46:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:49.094 00:05:49.094 real 0m0.227s 00:05:49.094 user 0m0.149s 00:05:49.094 sys 0m0.022s 00:05:49.094 02:46:39 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:49.094 02:46:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.094 ************************************ 00:05:49.094 END TEST rpc_integrity 00:05:49.094 ************************************ 00:05:49.094 02:46:39 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:49.094 02:46:39 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:49.094 02:46:39 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:49.094 02:46:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.094 ************************************ 00:05:49.094 START TEST rpc_plugins 00:05:49.094 ************************************ 00:05:49.094 02:46:39 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:05:49.094 02:46:39 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:49.094 02:46:39 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.094 02:46:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:49.094 02:46:39 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.094 02:46:39 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:49.094 02:46:39 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:49.094 02:46:39 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.094 02:46:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:49.094 02:46:39 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.094 02:46:39 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:49.094 { 00:05:49.094 "name": "Malloc1", 00:05:49.094 "aliases": [ 00:05:49.094 "231d938d-36d1-4f9f-942f-6e2f8f8eb872" 00:05:49.094 ], 00:05:49.094 "product_name": "Malloc disk", 00:05:49.094 "block_size": 4096, 00:05:49.094 "num_blocks": 256, 00:05:49.094 "uuid": "231d938d-36d1-4f9f-942f-6e2f8f8eb872", 00:05:49.094 "assigned_rate_limits": { 00:05:49.094 "rw_ios_per_sec": 0, 00:05:49.094 "rw_mbytes_per_sec": 0, 00:05:49.094 "r_mbytes_per_sec": 0, 00:05:49.094 "w_mbytes_per_sec": 0 00:05:49.094 }, 00:05:49.094 "claimed": false, 
00:05:49.094 "zoned": false, 00:05:49.094 "supported_io_types": { 00:05:49.094 "read": true, 00:05:49.094 "write": true, 00:05:49.094 "unmap": true, 00:05:49.094 "write_zeroes": true, 00:05:49.094 "flush": true, 00:05:49.094 "reset": true, 00:05:49.094 "compare": false, 00:05:49.094 "compare_and_write": false, 00:05:49.094 "abort": true, 00:05:49.094 "nvme_admin": false, 00:05:49.094 "nvme_io": false 00:05:49.094 }, 00:05:49.094 "memory_domains": [ 00:05:49.094 { 00:05:49.094 "dma_device_id": "system", 00:05:49.094 "dma_device_type": 1 00:05:49.094 }, 00:05:49.094 { 00:05:49.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.094 "dma_device_type": 2 00:05:49.094 } 00:05:49.094 ], 00:05:49.094 "driver_specific": {} 00:05:49.094 } 00:05:49.094 ]' 00:05:49.094 02:46:39 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:49.352 02:46:39 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:49.352 02:46:39 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:49.352 02:46:39 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.352 02:46:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:49.352 02:46:39 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.353 02:46:39 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:49.353 02:46:39 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.353 02:46:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:49.353 02:46:39 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.353 02:46:39 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:49.353 02:46:39 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:49.353 02:46:39 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:49.353 00:05:49.353 real 0m0.115s 00:05:49.353 user 0m0.079s 00:05:49.353 sys 0m0.006s 00:05:49.353 02:46:39 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:49.353 02:46:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:49.353 ************************************ 00:05:49.353 END TEST rpc_plugins 00:05:49.353 ************************************ 00:05:49.353 02:46:40 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:49.353 02:46:40 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:49.353 02:46:40 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:49.353 02:46:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.353 ************************************ 00:05:49.353 START TEST rpc_trace_cmd_test 00:05:49.353 ************************************ 00:05:49.353 02:46:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:05:49.353 02:46:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:49.353 02:46:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:49.353 02:46:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.353 02:46:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:49.353 02:46:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.353 02:46:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:49.353 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid222158", 00:05:49.353 "tpoint_group_mask": "0x8", 00:05:49.353 "iscsi_conn": { 00:05:49.353 "mask": "0x2", 00:05:49.353 "tpoint_mask": "0x0" 00:05:49.353 }, 
00:05:49.353 "scsi": { 00:05:49.353 "mask": "0x4", 00:05:49.353 "tpoint_mask": "0x0" 00:05:49.353 }, 00:05:49.353 "bdev": { 00:05:49.353 "mask": "0x8", 00:05:49.353 "tpoint_mask": "0xffffffffffffffff" 00:05:49.353 }, 00:05:49.353 "nvmf_rdma": { 00:05:49.353 "mask": "0x10", 00:05:49.353 "tpoint_mask": "0x0" 00:05:49.353 }, 00:05:49.353 "nvmf_tcp": { 00:05:49.353 "mask": "0x20", 00:05:49.353 "tpoint_mask": "0x0" 00:05:49.353 }, 00:05:49.353 "ftl": { 00:05:49.353 "mask": "0x40", 00:05:49.353 "tpoint_mask": "0x0" 00:05:49.353 }, 00:05:49.353 "blobfs": { 00:05:49.353 "mask": "0x80", 00:05:49.353 "tpoint_mask": "0x0" 00:05:49.353 }, 00:05:49.353 "dsa": { 00:05:49.353 "mask": "0x200", 00:05:49.353 "tpoint_mask": "0x0" 00:05:49.353 }, 00:05:49.353 "thread": { 00:05:49.353 "mask": "0x400", 00:05:49.353 "tpoint_mask": "0x0" 00:05:49.353 }, 00:05:49.353 "nvme_pcie": { 00:05:49.353 "mask": "0x800", 00:05:49.353 "tpoint_mask": "0x0" 00:05:49.353 }, 00:05:49.353 "iaa": { 00:05:49.353 "mask": "0x1000", 00:05:49.353 "tpoint_mask": "0x0" 00:05:49.353 }, 00:05:49.353 "nvme_tcp": { 00:05:49.353 "mask": "0x2000", 00:05:49.353 "tpoint_mask": "0x0" 00:05:49.353 }, 00:05:49.353 "bdev_nvme": { 00:05:49.353 "mask": "0x4000", 00:05:49.353 "tpoint_mask": "0x0" 00:05:49.353 }, 00:05:49.353 "sock": { 00:05:49.353 "mask": "0x8000", 00:05:49.353 "tpoint_mask": "0x0" 00:05:49.353 } 00:05:49.353 }' 00:05:49.353 02:46:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:49.353 02:46:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:49.353 02:46:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:49.353 02:46:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:49.353 02:46:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:49.611 02:46:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:49.611 02:46:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:49.611 02:46:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:49.612 02:46:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:49.612 02:46:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:49.612 00:05:49.612 real 0m0.201s 00:05:49.612 user 0m0.177s 00:05:49.612 sys 0m0.015s 00:05:49.612 02:46:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:49.612 02:46:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:49.612 ************************************ 00:05:49.612 END TEST rpc_trace_cmd_test 00:05:49.612 ************************************ 00:05:49.612 02:46:40 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:49.612 02:46:40 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:49.612 02:46:40 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:49.612 02:46:40 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:49.612 02:46:40 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:49.612 02:46:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.612 ************************************ 00:05:49.612 START TEST rpc_daemon_integrity 00:05:49.612 ************************************ 00:05:49.612 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:49.612 02:46:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:49.612 02:46:40 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.612 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.612 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.612 02:46:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:49.612 02:46:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:49.612 02:46:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:49.612 02:46:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:49.612 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.612 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.612 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.612 02:46:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:49.612 02:46:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:49.612 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.612 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.612 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.612 02:46:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:49.612 { 00:05:49.612 "name": "Malloc2", 00:05:49.612 "aliases": [ 00:05:49.612 "7c2a8e76-0340-48ff-bcd1-b0b6320ba185" 00:05:49.612 ], 00:05:49.612 "product_name": "Malloc disk", 00:05:49.612 "block_size": 512, 00:05:49.612 "num_blocks": 16384, 00:05:49.612 "uuid": "7c2a8e76-0340-48ff-bcd1-b0b6320ba185", 00:05:49.612 "assigned_rate_limits": { 00:05:49.612 "rw_ios_per_sec": 0, 00:05:49.612 "rw_mbytes_per_sec": 0, 00:05:49.612 "r_mbytes_per_sec": 0, 00:05:49.612 "w_mbytes_per_sec": 0 00:05:49.612 }, 00:05:49.612 "claimed": false, 00:05:49.612 "zoned": false, 00:05:49.612 "supported_io_types": { 00:05:49.612 "read": true, 00:05:49.612 "write": true, 00:05:49.612 "unmap": true, 00:05:49.612 "write_zeroes": true, 00:05:49.612 "flush": true, 00:05:49.612 "reset": true, 00:05:49.612 "compare": false, 00:05:49.612 "compare_and_write": false, 00:05:49.612 "abort": true, 00:05:49.612 "nvme_admin": false, 00:05:49.612 "nvme_io": false 00:05:49.612 }, 00:05:49.612 "memory_domains": [ 00:05:49.612 { 00:05:49.612 "dma_device_id": "system", 00:05:49.612 "dma_device_type": 1 00:05:49.612 }, 00:05:49.612 { 00:05:49.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.612 "dma_device_type": 2 00:05:49.612 } 00:05:49.612 ], 00:05:49.612 "driver_specific": {} 00:05:49.612 } 00:05:49.612 ]' 00:05:49.612 02:46:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:49.612 02:46:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:49.612 02:46:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:49.612 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.612 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.612 [2024-05-13 02:46:40.385804] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:49.612 [2024-05-13 02:46:40.385845] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:49.612 [2024-05-13 02:46:40.385867] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x19f71b0 00:05:49.612 [2024-05-13 02:46:40.385881] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:49.612 [2024-05-13 02:46:40.387241] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:49.612 [2024-05-13 02:46:40.387270] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:49.612 Passthru0 00:05:49.612 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.612 02:46:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:49.612 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.612 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.612 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.612 02:46:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:49.612 { 00:05:49.612 "name": "Malloc2", 00:05:49.612 "aliases": [ 00:05:49.612 "7c2a8e76-0340-48ff-bcd1-b0b6320ba185" 00:05:49.612 ], 00:05:49.612 "product_name": "Malloc disk", 00:05:49.612 "block_size": 512, 00:05:49.612 "num_blocks": 16384, 00:05:49.612 "uuid": "7c2a8e76-0340-48ff-bcd1-b0b6320ba185", 00:05:49.612 "assigned_rate_limits": { 00:05:49.612 "rw_ios_per_sec": 0, 00:05:49.612 "rw_mbytes_per_sec": 0, 00:05:49.612 "r_mbytes_per_sec": 0, 00:05:49.612 "w_mbytes_per_sec": 0 00:05:49.612 }, 00:05:49.612 "claimed": true, 00:05:49.612 "claim_type": "exclusive_write", 00:05:49.612 "zoned": false, 00:05:49.612 "supported_io_types": { 00:05:49.612 "read": true, 00:05:49.612 "write": true, 00:05:49.612 "unmap": true, 00:05:49.612 "write_zeroes": true, 00:05:49.612 "flush": true, 00:05:49.612 "reset": true, 00:05:49.612 "compare": false, 00:05:49.612 "compare_and_write": false, 00:05:49.612 "abort": true, 00:05:49.612 "nvme_admin": false, 00:05:49.612 "nvme_io": false 00:05:49.612 }, 00:05:49.612 "memory_domains": [ 00:05:49.612 { 00:05:49.612 "dma_device_id": "system", 00:05:49.612 "dma_device_type": 1 00:05:49.612 }, 00:05:49.612 { 00:05:49.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.612 "dma_device_type": 2 00:05:49.612 } 00:05:49.612 ], 00:05:49.612 "driver_specific": {} 00:05:49.612 }, 00:05:49.612 { 00:05:49.612 "name": "Passthru0", 00:05:49.612 "aliases": [ 00:05:49.612 "a52c527c-30c9-520a-b877-80aea69ca65c" 00:05:49.612 ], 00:05:49.612 "product_name": "passthru", 00:05:49.612 "block_size": 512, 00:05:49.612 "num_blocks": 16384, 00:05:49.612 "uuid": "a52c527c-30c9-520a-b877-80aea69ca65c", 00:05:49.612 "assigned_rate_limits": { 00:05:49.612 "rw_ios_per_sec": 0, 00:05:49.612 "rw_mbytes_per_sec": 0, 00:05:49.612 "r_mbytes_per_sec": 0, 00:05:49.612 "w_mbytes_per_sec": 0 00:05:49.612 }, 00:05:49.612 "claimed": false, 00:05:49.612 "zoned": false, 00:05:49.612 "supported_io_types": { 00:05:49.612 "read": true, 00:05:49.612 "write": true, 00:05:49.612 "unmap": true, 00:05:49.612 "write_zeroes": true, 00:05:49.612 "flush": true, 00:05:49.612 "reset": true, 00:05:49.612 "compare": false, 00:05:49.612 "compare_and_write": false, 00:05:49.612 "abort": true, 00:05:49.612 "nvme_admin": false, 00:05:49.612 "nvme_io": false 00:05:49.612 }, 00:05:49.612 "memory_domains": [ 00:05:49.612 { 00:05:49.612 "dma_device_id": "system", 00:05:49.612 "dma_device_type": 1 00:05:49.612 }, 00:05:49.612 { 00:05:49.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.612 "dma_device_type": 2 00:05:49.612 } 00:05:49.612 ], 00:05:49.612 "driver_specific": { 
00:05:49.612 "passthru": { 00:05:49.612 "name": "Passthru0", 00:05:49.612 "base_bdev_name": "Malloc2" 00:05:49.612 } 00:05:49.612 } 00:05:49.612 } 00:05:49.612 ]' 00:05:49.612 02:46:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:49.871 02:46:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:49.871 02:46:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:49.871 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.871 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.871 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.871 02:46:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:49.871 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.871 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.871 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.871 02:46:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:49.871 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.871 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.871 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.871 02:46:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:49.871 02:46:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:49.871 02:46:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:49.871 00:05:49.871 real 0m0.221s 00:05:49.871 user 0m0.143s 00:05:49.871 sys 0m0.025s 00:05:49.871 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:49.871 02:46:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.871 ************************************ 00:05:49.871 END TEST rpc_daemon_integrity 00:05:49.871 ************************************ 00:05:49.871 02:46:40 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:49.871 02:46:40 rpc -- rpc/rpc.sh@84 -- # killprocess 222158 00:05:49.871 02:46:40 rpc -- common/autotest_common.sh@946 -- # '[' -z 222158 ']' 00:05:49.871 02:46:40 rpc -- common/autotest_common.sh@950 -- # kill -0 222158 00:05:49.871 02:46:40 rpc -- common/autotest_common.sh@951 -- # uname 00:05:49.871 02:46:40 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:49.871 02:46:40 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 222158 00:05:49.871 02:46:40 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:49.871 02:46:40 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:49.871 02:46:40 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 222158' 00:05:49.871 killing process with pid 222158 00:05:49.871 02:46:40 rpc -- common/autotest_common.sh@965 -- # kill 222158 00:05:49.871 02:46:40 rpc -- common/autotest_common.sh@970 -- # wait 222158 00:05:50.437 00:05:50.437 real 0m1.898s 00:05:50.437 user 0m2.400s 00:05:50.437 sys 0m0.586s 00:05:50.437 02:46:40 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:50.437 02:46:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.437 ************************************ 00:05:50.437 END TEST rpc 00:05:50.437 
************************************ 00:05:50.437 02:46:40 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:50.437 02:46:40 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:50.437 02:46:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:50.437 02:46:40 -- common/autotest_common.sh@10 -- # set +x 00:05:50.437 ************************************ 00:05:50.437 START TEST skip_rpc 00:05:50.437 ************************************ 00:05:50.437 02:46:41 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:50.437 * Looking for test storage... 00:05:50.437 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:50.437 02:46:41 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:50.437 02:46:41 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:50.437 02:46:41 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:50.437 02:46:41 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:50.437 02:46:41 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:50.437 02:46:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.437 ************************************ 00:05:50.437 START TEST skip_rpc 00:05:50.437 ************************************ 00:05:50.437 02:46:41 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:05:50.437 02:46:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=222598 00:05:50.437 02:46:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:50.437 02:46:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:50.438 02:46:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:50.438 [2024-05-13 02:46:41.148608] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:05:50.438 [2024-05-13 02:46:41.148672] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid222598 ] 00:05:50.438 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.438 [2024-05-13 02:46:41.178514] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:50.438 [2024-05-13 02:46:41.205860] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.696 [2024-05-13 02:46:41.296552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.958 02:46:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:55.958 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:55.958 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:55.958 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:55.958 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.958 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:55.958 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.958 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:55.958 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.958 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.958 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:55.958 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:55.958 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:55.958 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:55.958 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:55.958 02:46:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:55.958 02:46:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 222598 00:05:55.958 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 222598 ']' 00:05:55.958 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 222598 00:05:55.958 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:05:55.958 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:55.958 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 222598 00:05:55.958 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:55.958 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:55.958 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 222598' 00:05:55.958 killing process with pid 222598 00:05:55.958 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 222598 00:05:55.958 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 222598 00:05:55.958 00:05:55.958 real 0m5.456s 00:05:55.958 user 0m5.144s 00:05:55.958 sys 0m0.310s 00:05:55.958 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:55.958 02:46:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.958 ************************************ 00:05:55.958 END TEST skip_rpc 00:05:55.958 ************************************ 00:05:55.958 02:46:46 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:55.958 02:46:46 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:55.958 02:46:46 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:55.958 
02:46:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.958 ************************************ 00:05:55.958 START TEST skip_rpc_with_json 00:05:55.958 ************************************ 00:05:55.958 02:46:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:05:55.958 02:46:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:55.958 02:46:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=223292 00:05:55.958 02:46:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:55.958 02:46:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:55.958 02:46:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 223292 00:05:55.958 02:46:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 223292 ']' 00:05:55.958 02:46:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.958 02:46:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:55.958 02:46:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.958 02:46:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:55.958 02:46:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:55.958 [2024-05-13 02:46:46.660504] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:05:55.958 [2024-05-13 02:46:46.660594] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid223292 ] 00:05:55.958 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.958 [2024-05-13 02:46:46.693447] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:55.958 [2024-05-13 02:46:46.723829] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.217 [2024-05-13 02:46:46.819505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.475 02:46:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:56.475 02:46:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:05:56.475 02:46:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:56.475 02:46:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.475 02:46:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:56.475 [2024-05-13 02:46:47.081967] nvmf_rpc.c:2531:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:56.475 request: 00:05:56.475 { 00:05:56.475 "trtype": "tcp", 00:05:56.475 "method": "nvmf_get_transports", 00:05:56.475 "req_id": 1 00:05:56.475 } 00:05:56.475 Got JSON-RPC error response 00:05:56.475 response: 00:05:56.475 { 00:05:56.475 "code": -19, 00:05:56.475 "message": "No such device" 00:05:56.475 } 00:05:56.475 02:46:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:56.475 02:46:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:56.475 02:46:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.475 02:46:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:56.475 [2024-05-13 02:46:47.090099] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:56.475 02:46:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.475 02:46:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:56.475 02:46:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.475 02:46:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:56.475 02:46:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.475 02:46:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:56.475 { 00:05:56.475 "subsystems": [ 00:05:56.475 { 00:05:56.475 "subsystem": "vfio_user_target", 00:05:56.475 "config": null 00:05:56.475 }, 00:05:56.475 { 00:05:56.475 "subsystem": "keyring", 00:05:56.475 "config": [] 00:05:56.475 }, 00:05:56.475 { 00:05:56.475 "subsystem": "iobuf", 00:05:56.475 "config": [ 00:05:56.475 { 00:05:56.475 "method": "iobuf_set_options", 00:05:56.475 "params": { 00:05:56.475 "small_pool_count": 8192, 00:05:56.475 "large_pool_count": 1024, 00:05:56.475 "small_bufsize": 8192, 00:05:56.475 "large_bufsize": 135168 00:05:56.475 } 00:05:56.475 } 00:05:56.475 ] 00:05:56.475 }, 00:05:56.475 { 00:05:56.475 "subsystem": "sock", 00:05:56.475 "config": [ 00:05:56.475 { 00:05:56.476 "method": "sock_impl_set_options", 00:05:56.476 "params": { 00:05:56.476 "impl_name": "posix", 00:05:56.476 "recv_buf_size": 2097152, 00:05:56.476 "send_buf_size": 2097152, 00:05:56.476 "enable_recv_pipe": true, 00:05:56.476 "enable_quickack": false, 00:05:56.476 "enable_placement_id": 0, 00:05:56.476 "enable_zerocopy_send_server": true, 00:05:56.476 "enable_zerocopy_send_client": false, 00:05:56.476 "zerocopy_threshold": 0, 00:05:56.476 "tls_version": 0, 00:05:56.476 "enable_ktls": false 00:05:56.476 } 
00:05:56.476 }, 00:05:56.476 { 00:05:56.476 "method": "sock_impl_set_options", 00:05:56.476 "params": { 00:05:56.476 "impl_name": "ssl", 00:05:56.476 "recv_buf_size": 4096, 00:05:56.476 "send_buf_size": 4096, 00:05:56.476 "enable_recv_pipe": true, 00:05:56.476 "enable_quickack": false, 00:05:56.476 "enable_placement_id": 0, 00:05:56.476 "enable_zerocopy_send_server": true, 00:05:56.476 "enable_zerocopy_send_client": false, 00:05:56.476 "zerocopy_threshold": 0, 00:05:56.476 "tls_version": 0, 00:05:56.476 "enable_ktls": false 00:05:56.476 } 00:05:56.476 } 00:05:56.476 ] 00:05:56.476 }, 00:05:56.476 { 00:05:56.476 "subsystem": "vmd", 00:05:56.476 "config": [] 00:05:56.476 }, 00:05:56.476 { 00:05:56.476 "subsystem": "accel", 00:05:56.476 "config": [ 00:05:56.476 { 00:05:56.476 "method": "accel_set_options", 00:05:56.476 "params": { 00:05:56.476 "small_cache_size": 128, 00:05:56.476 "large_cache_size": 16, 00:05:56.476 "task_count": 2048, 00:05:56.476 "sequence_count": 2048, 00:05:56.476 "buf_count": 2048 00:05:56.476 } 00:05:56.476 } 00:05:56.476 ] 00:05:56.476 }, 00:05:56.476 { 00:05:56.476 "subsystem": "bdev", 00:05:56.476 "config": [ 00:05:56.476 { 00:05:56.476 "method": "bdev_set_options", 00:05:56.476 "params": { 00:05:56.476 "bdev_io_pool_size": 65535, 00:05:56.476 "bdev_io_cache_size": 256, 00:05:56.476 "bdev_auto_examine": true, 00:05:56.476 "iobuf_small_cache_size": 128, 00:05:56.476 "iobuf_large_cache_size": 16 00:05:56.476 } 00:05:56.476 }, 00:05:56.476 { 00:05:56.476 "method": "bdev_raid_set_options", 00:05:56.476 "params": { 00:05:56.476 "process_window_size_kb": 1024 00:05:56.476 } 00:05:56.476 }, 00:05:56.476 { 00:05:56.476 "method": "bdev_iscsi_set_options", 00:05:56.476 "params": { 00:05:56.476 "timeout_sec": 30 00:05:56.476 } 00:05:56.476 }, 00:05:56.476 { 00:05:56.476 "method": "bdev_nvme_set_options", 00:05:56.476 "params": { 00:05:56.476 "action_on_timeout": "none", 00:05:56.476 "timeout_us": 0, 00:05:56.476 "timeout_admin_us": 0, 00:05:56.476 "keep_alive_timeout_ms": 10000, 00:05:56.476 "arbitration_burst": 0, 00:05:56.476 "low_priority_weight": 0, 00:05:56.476 "medium_priority_weight": 0, 00:05:56.476 "high_priority_weight": 0, 00:05:56.476 "nvme_adminq_poll_period_us": 10000, 00:05:56.476 "nvme_ioq_poll_period_us": 0, 00:05:56.476 "io_queue_requests": 0, 00:05:56.476 "delay_cmd_submit": true, 00:05:56.476 "transport_retry_count": 4, 00:05:56.476 "bdev_retry_count": 3, 00:05:56.476 "transport_ack_timeout": 0, 00:05:56.476 "ctrlr_loss_timeout_sec": 0, 00:05:56.476 "reconnect_delay_sec": 0, 00:05:56.476 "fast_io_fail_timeout_sec": 0, 00:05:56.476 "disable_auto_failback": false, 00:05:56.476 "generate_uuids": false, 00:05:56.476 "transport_tos": 0, 00:05:56.476 "nvme_error_stat": false, 00:05:56.476 "rdma_srq_size": 0, 00:05:56.476 "io_path_stat": false, 00:05:56.476 "allow_accel_sequence": false, 00:05:56.476 "rdma_max_cq_size": 0, 00:05:56.476 "rdma_cm_event_timeout_ms": 0, 00:05:56.476 "dhchap_digests": [ 00:05:56.476 "sha256", 00:05:56.476 "sha384", 00:05:56.476 "sha512" 00:05:56.476 ], 00:05:56.476 "dhchap_dhgroups": [ 00:05:56.476 "null", 00:05:56.476 "ffdhe2048", 00:05:56.476 "ffdhe3072", 00:05:56.476 "ffdhe4096", 00:05:56.476 "ffdhe6144", 00:05:56.476 "ffdhe8192" 00:05:56.476 ] 00:05:56.476 } 00:05:56.476 }, 00:05:56.476 { 00:05:56.476 "method": "bdev_nvme_set_hotplug", 00:05:56.476 "params": { 00:05:56.476 "period_us": 100000, 00:05:56.476 "enable": false 00:05:56.476 } 00:05:56.476 }, 00:05:56.476 { 00:05:56.476 "method": "bdev_wait_for_examine" 00:05:56.476 } 
00:05:56.476 ] 00:05:56.476 }, 00:05:56.476 { 00:05:56.476 "subsystem": "scsi", 00:05:56.476 "config": null 00:05:56.476 }, 00:05:56.476 { 00:05:56.476 "subsystem": "scheduler", 00:05:56.476 "config": [ 00:05:56.476 { 00:05:56.476 "method": "framework_set_scheduler", 00:05:56.476 "params": { 00:05:56.476 "name": "static" 00:05:56.476 } 00:05:56.476 } 00:05:56.476 ] 00:05:56.476 }, 00:05:56.476 { 00:05:56.476 "subsystem": "vhost_scsi", 00:05:56.476 "config": [] 00:05:56.476 }, 00:05:56.476 { 00:05:56.476 "subsystem": "vhost_blk", 00:05:56.476 "config": [] 00:05:56.476 }, 00:05:56.476 { 00:05:56.476 "subsystem": "ublk", 00:05:56.476 "config": [] 00:05:56.476 }, 00:05:56.476 { 00:05:56.476 "subsystem": "nbd", 00:05:56.476 "config": [] 00:05:56.476 }, 00:05:56.476 { 00:05:56.476 "subsystem": "nvmf", 00:05:56.476 "config": [ 00:05:56.476 { 00:05:56.476 "method": "nvmf_set_config", 00:05:56.476 "params": { 00:05:56.476 "discovery_filter": "match_any", 00:05:56.476 "admin_cmd_passthru": { 00:05:56.476 "identify_ctrlr": false 00:05:56.476 } 00:05:56.476 } 00:05:56.476 }, 00:05:56.476 { 00:05:56.476 "method": "nvmf_set_max_subsystems", 00:05:56.476 "params": { 00:05:56.476 "max_subsystems": 1024 00:05:56.476 } 00:05:56.476 }, 00:05:56.476 { 00:05:56.476 "method": "nvmf_set_crdt", 00:05:56.476 "params": { 00:05:56.476 "crdt1": 0, 00:05:56.476 "crdt2": 0, 00:05:56.476 "crdt3": 0 00:05:56.476 } 00:05:56.476 }, 00:05:56.476 { 00:05:56.476 "method": "nvmf_create_transport", 00:05:56.476 "params": { 00:05:56.476 "trtype": "TCP", 00:05:56.476 "max_queue_depth": 128, 00:05:56.476 "max_io_qpairs_per_ctrlr": 127, 00:05:56.476 "in_capsule_data_size": 4096, 00:05:56.476 "max_io_size": 131072, 00:05:56.476 "io_unit_size": 131072, 00:05:56.476 "max_aq_depth": 128, 00:05:56.476 "num_shared_buffers": 511, 00:05:56.476 "buf_cache_size": 4294967295, 00:05:56.476 "dif_insert_or_strip": false, 00:05:56.476 "zcopy": false, 00:05:56.476 "c2h_success": true, 00:05:56.476 "sock_priority": 0, 00:05:56.476 "abort_timeout_sec": 1, 00:05:56.476 "ack_timeout": 0, 00:05:56.476 "data_wr_pool_size": 0 00:05:56.476 } 00:05:56.476 } 00:05:56.476 ] 00:05:56.476 }, 00:05:56.476 { 00:05:56.476 "subsystem": "iscsi", 00:05:56.476 "config": [ 00:05:56.476 { 00:05:56.476 "method": "iscsi_set_options", 00:05:56.476 "params": { 00:05:56.476 "node_base": "iqn.2016-06.io.spdk", 00:05:56.476 "max_sessions": 128, 00:05:56.476 "max_connections_per_session": 2, 00:05:56.476 "max_queue_depth": 64, 00:05:56.476 "default_time2wait": 2, 00:05:56.476 "default_time2retain": 20, 00:05:56.476 "first_burst_length": 8192, 00:05:56.476 "immediate_data": true, 00:05:56.476 "allow_duplicated_isid": false, 00:05:56.476 "error_recovery_level": 0, 00:05:56.476 "nop_timeout": 60, 00:05:56.476 "nop_in_interval": 30, 00:05:56.476 "disable_chap": false, 00:05:56.476 "require_chap": false, 00:05:56.476 "mutual_chap": false, 00:05:56.476 "chap_group": 0, 00:05:56.476 "max_large_datain_per_connection": 64, 00:05:56.476 "max_r2t_per_connection": 4, 00:05:56.476 "pdu_pool_size": 36864, 00:05:56.476 "immediate_data_pool_size": 16384, 00:05:56.476 "data_out_pool_size": 2048 00:05:56.476 } 00:05:56.476 } 00:05:56.476 ] 00:05:56.476 } 00:05:56.476 ] 00:05:56.476 } 00:05:56.476 02:46:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:56.476 02:46:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 223292 00:05:56.476 02:46:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 223292 ']' 
00:05:56.476 02:46:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 223292 00:05:56.476 02:46:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:56.476 02:46:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:56.476 02:46:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 223292 00:05:56.476 02:46:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:56.477 02:46:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:56.477 02:46:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 223292' 00:05:56.477 killing process with pid 223292 00:05:56.477 02:46:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 223292 00:05:56.477 02:46:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 223292 00:05:57.043 02:46:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=223432 00:05:57.043 02:46:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:57.043 02:46:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:02.333 02:46:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 223432 00:06:02.333 02:46:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 223432 ']' 00:06:02.333 02:46:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 223432 00:06:02.333 02:46:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:06:02.333 02:46:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:02.333 02:46:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 223432 00:06:02.333 02:46:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:02.333 02:46:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:02.333 02:46:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 223432' 00:06:02.333 killing process with pid 223432 00:06:02.333 02:46:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 223432 00:06:02.333 02:46:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 223432 00:06:02.333 02:46:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:02.333 02:46:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:02.333 00:06:02.333 real 0m6.523s 00:06:02.333 user 0m6.138s 00:06:02.333 sys 0m0.704s 00:06:02.333 02:46:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:02.333 02:46:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:02.333 ************************************ 00:06:02.333 END TEST skip_rpc_with_json 00:06:02.333 ************************************ 00:06:02.593 02:46:53 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 
00:06:02.593 02:46:53 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:02.593 02:46:53 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:02.593 02:46:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.593 ************************************ 00:06:02.593 START TEST skip_rpc_with_delay 00:06:02.593 ************************************ 00:06:02.593 02:46:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:06:02.593 02:46:53 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:02.593 02:46:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:06:02.593 02:46:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:02.593 02:46:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:02.593 02:46:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:02.593 02:46:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:02.593 02:46:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:02.593 02:46:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:02.593 02:46:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:02.593 02:46:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:02.593 02:46:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:02.593 02:46:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:02.593 [2024-05-13 02:46:53.232771] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:02.593 [2024-05-13 02:46:53.232879] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:02.593 02:46:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:06:02.593 02:46:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:02.593 02:46:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:02.593 02:46:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:02.593 00:06:02.593 real 0m0.063s 00:06:02.593 user 0m0.040s 00:06:02.593 sys 0m0.022s 00:06:02.593 02:46:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:02.593 02:46:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:02.593 ************************************ 00:06:02.593 END TEST skip_rpc_with_delay 00:06:02.593 ************************************ 00:06:02.593 02:46:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:02.593 02:46:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:02.593 02:46:53 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:02.593 02:46:53 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:02.593 02:46:53 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:02.593 02:46:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.593 ************************************ 00:06:02.593 START TEST exit_on_failed_rpc_init 00:06:02.593 ************************************ 00:06:02.593 02:46:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:06:02.593 02:46:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=224146 00:06:02.593 02:46:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:02.593 02:46:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 224146 00:06:02.593 02:46:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 224146 ']' 00:06:02.593 02:46:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.593 02:46:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:02.593 02:46:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.593 02:46:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:02.593 02:46:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:02.593 [2024-05-13 02:46:53.347751] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:06:02.593 [2024-05-13 02:46:53.347843] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid224146 ] 00:06:02.593 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.593 [2024-05-13 02:46:53.380110] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:06:02.851 [2024-05-13 02:46:53.406972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.851 [2024-05-13 02:46:53.496259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.117 02:46:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:03.117 02:46:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:06:03.117 02:46:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:03.118 02:46:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:03.118 02:46:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:06:03.118 02:46:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:03.118 02:46:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:03.118 02:46:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:03.118 02:46:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:03.118 02:46:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:03.118 02:46:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:03.118 02:46:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:03.118 02:46:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:03.118 02:46:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:03.118 02:46:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:03.118 [2024-05-13 02:46:53.800147] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:06:03.118 [2024-05-13 02:46:53.800234] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid224162 ] 00:06:03.118 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.118 [2024-05-13 02:46:53.833427] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:03.118 [2024-05-13 02:46:53.863573] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.376 [2024-05-13 02:46:53.957574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.376 [2024-05-13 02:46:53.957706] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:03.376 [2024-05-13 02:46:53.957729] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:03.376 [2024-05-13 02:46:53.957747] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:03.376 02:46:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:06:03.376 02:46:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:03.376 02:46:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:06:03.376 02:46:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:06:03.376 02:46:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:06:03.376 02:46:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:03.376 02:46:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:03.376 02:46:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 224146 00:06:03.376 02:46:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 224146 ']' 00:06:03.376 02:46:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 224146 00:06:03.376 02:46:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:06:03.376 02:46:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:03.376 02:46:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 224146 00:06:03.376 02:46:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:03.376 02:46:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:03.376 02:46:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 224146' 00:06:03.376 killing process with pid 224146 00:06:03.376 02:46:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 224146 00:06:03.376 02:46:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 224146 00:06:03.939 00:06:03.939 real 0m1.196s 00:06:03.939 user 0m1.304s 00:06:03.939 sys 0m0.452s 00:06:03.939 02:46:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:03.939 02:46:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:03.939 ************************************ 00:06:03.939 END TEST exit_on_failed_rpc_init 00:06:03.939 ************************************ 00:06:03.939 02:46:54 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:03.939 00:06:03.939 real 0m13.507s 00:06:03.939 user 0m12.730s 00:06:03.939 sys 0m1.663s 00:06:03.939 02:46:54 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:03.939 02:46:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.939 ************************************ 00:06:03.939 END TEST skip_rpc 00:06:03.939 ************************************ 00:06:03.939 02:46:54 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:03.939 02:46:54 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:03.939 02:46:54 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:03.939 02:46:54 -- 
common/autotest_common.sh@10 -- # set +x 00:06:03.939 ************************************ 00:06:03.939 START TEST rpc_client 00:06:03.939 ************************************ 00:06:03.939 02:46:54 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:03.939 * Looking for test storage... 00:06:03.939 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:03.939 02:46:54 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:03.939 OK 00:06:03.939 02:46:54 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:03.939 00:06:03.939 real 0m0.065s 00:06:03.939 user 0m0.035s 00:06:03.939 sys 0m0.036s 00:06:03.939 02:46:54 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:03.939 02:46:54 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:03.939 ************************************ 00:06:03.939 END TEST rpc_client 00:06:03.939 ************************************ 00:06:03.939 02:46:54 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:03.939 02:46:54 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:03.939 02:46:54 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:03.939 02:46:54 -- common/autotest_common.sh@10 -- # set +x 00:06:03.939 ************************************ 00:06:03.939 START TEST json_config 00:06:03.939 ************************************ 00:06:03.939 02:46:54 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:03.939 02:46:54 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:03.939 02:46:54 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:03.939 02:46:54 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:03.939 02:46:54 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:03.939 02:46:54 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:03.939 02:46:54 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:03.939 02:46:54 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:03.939 02:46:54 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:03.939 02:46:54 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:03.939 02:46:54 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:03.939 02:46:54 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:03.939 02:46:54 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:03.939 02:46:54 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:03.939 02:46:54 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:03.939 02:46:54 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:03.939 02:46:54 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:03.939 02:46:54 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:03.939 02:46:54 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:03.939 02:46:54 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:03.939 02:46:54 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:03.939 02:46:54 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:03.939 02:46:54 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:03.939 02:46:54 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.939 02:46:54 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.939 02:46:54 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.939 02:46:54 json_config -- paths/export.sh@5 -- # export PATH 00:06:03.939 02:46:54 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.939 02:46:54 json_config -- nvmf/common.sh@47 -- # : 0 00:06:03.939 02:46:54 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:03.939 02:46:54 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:03.939 02:46:54 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:03.939 02:46:54 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:03.940 02:46:54 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:03.940 02:46:54 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:03.940 02:46:54 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:03.940 02:46:54 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:03.940 02:46:54 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:03.940 02:46:54 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:03.940 02:46:54 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:03.940 02:46:54 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:03.940 02:46:54 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:03.940 02:46:54 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:03.940 02:46:54 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:03.940 02:46:54 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:03.940 02:46:54 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:03.940 02:46:54 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:03.940 02:46:54 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:03.940 02:46:54 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:03.940 02:46:54 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:03.940 02:46:54 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:04.197 02:46:54 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:04.197 02:46:54 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:06:04.197 INFO: JSON configuration test init 00:06:04.197 02:46:54 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:06:04.197 02:46:54 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:06:04.197 02:46:54 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:04.197 02:46:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.197 02:46:54 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:06:04.197 02:46:54 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:04.197 02:46:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.197 02:46:54 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:06:04.197 02:46:54 json_config -- json_config/common.sh@9 -- # local app=target 00:06:04.197 02:46:54 json_config -- json_config/common.sh@10 -- # shift 00:06:04.197 02:46:54 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:04.197 02:46:54 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:04.197 02:46:54 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:04.197 02:46:54 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:04.197 02:46:54 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:04.197 02:46:54 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=224400 00:06:04.197 02:46:54 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:04.197 02:46:54 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:04.197 Waiting for target to run... 
00:06:04.197 02:46:54 json_config -- json_config/common.sh@25 -- # waitforlisten 224400 /var/tmp/spdk_tgt.sock 00:06:04.197 02:46:54 json_config -- common/autotest_common.sh@827 -- # '[' -z 224400 ']' 00:06:04.197 02:46:54 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:04.197 02:46:54 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:04.197 02:46:54 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:04.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:04.197 02:46:54 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:04.197 02:46:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.197 [2024-05-13 02:46:54.794566] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:06:04.197 [2024-05-13 02:46:54.794661] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid224400 ] 00:06:04.197 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.764 [2024-05-13 02:46:55.281942] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:04.764 [2024-05-13 02:46:55.316368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.764 [2024-05-13 02:46:55.398127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.021 02:46:55 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:05.021 02:46:55 json_config -- common/autotest_common.sh@860 -- # return 0 00:06:05.021 02:46:55 json_config -- json_config/common.sh@26 -- # echo '' 00:06:05.021 00:06:05.021 02:46:55 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:06:05.021 02:46:55 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:06:05.021 02:46:55 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:05.021 02:46:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.021 02:46:55 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:06:05.021 02:46:55 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:06:05.021 02:46:55 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:05.021 02:46:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.021 02:46:55 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:05.021 02:46:55 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:06:05.021 02:46:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:08.300 02:46:58 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:06:08.300 02:46:58 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:08.300 02:46:58 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:08.300 02:46:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.300 02:46:58 json_config -- 
json_config/json_config.sh@45 -- # local ret=0 00:06:08.300 02:46:58 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:08.300 02:46:58 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:08.300 02:46:58 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:08.300 02:46:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:08.300 02:46:58 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:08.558 02:46:59 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:08.558 02:46:59 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:08.558 02:46:59 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:08.558 02:46:59 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:06:08.558 02:46:59 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:08.558 02:46:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.558 02:46:59 json_config -- json_config/json_config.sh@55 -- # return 0 00:06:08.558 02:46:59 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:06:08.558 02:46:59 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:08.558 02:46:59 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:08.558 02:46:59 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:06:08.558 02:46:59 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:06:08.558 02:46:59 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:06:08.558 02:46:59 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:08.558 02:46:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.558 02:46:59 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:08.558 02:46:59 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:06:08.558 02:46:59 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:06:08.558 02:46:59 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:08.558 02:46:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:08.817 MallocForNvmf0 00:06:08.817 02:46:59 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:08.817 02:46:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:09.075 MallocForNvmf1 00:06:09.075 02:46:59 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:09.075 02:46:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:09.333 [2024-05-13 02:46:59.914754] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:09.333 02:46:59 
json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:09.333 02:46:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:09.591 02:47:00 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:09.591 02:47:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:09.848 02:47:00 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:09.848 02:47:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:10.106 02:47:00 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:10.106 02:47:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:10.364 [2024-05-13 02:47:00.925569] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:10.364 [2024-05-13 02:47:00.926098] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:10.364 02:47:00 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:06:10.364 02:47:00 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:10.364 02:47:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.364 02:47:00 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:06:10.364 02:47:00 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:10.364 02:47:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.364 02:47:00 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:10.364 02:47:00 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:10.364 02:47:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:10.622 MallocBdevForConfigChangeCheck 00:06:10.622 02:47:01 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:10.622 02:47:01 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:10.622 02:47:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.622 02:47:01 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:10.622 02:47:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:10.880 02:47:01 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down 
applications...' 00:06:10.880 INFO: shutting down applications... 00:06:10.880 02:47:01 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:10.880 02:47:01 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:10.880 02:47:01 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:10.880 02:47:01 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:12.779 Calling clear_iscsi_subsystem 00:06:12.779 Calling clear_nvmf_subsystem 00:06:12.779 Calling clear_nbd_subsystem 00:06:12.779 Calling clear_ublk_subsystem 00:06:12.779 Calling clear_vhost_blk_subsystem 00:06:12.779 Calling clear_vhost_scsi_subsystem 00:06:12.779 Calling clear_bdev_subsystem 00:06:12.779 02:47:03 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:12.779 02:47:03 json_config -- json_config/json_config.sh@343 -- # count=100 00:06:12.779 02:47:03 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:12.779 02:47:03 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:12.779 02:47:03 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:12.779 02:47:03 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:13.050 02:47:03 json_config -- json_config/json_config.sh@345 -- # break 00:06:13.050 02:47:03 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:13.050 02:47:03 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:13.050 02:47:03 json_config -- json_config/common.sh@31 -- # local app=target 00:06:13.050 02:47:03 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:13.050 02:47:03 json_config -- json_config/common.sh@35 -- # [[ -n 224400 ]] 00:06:13.050 02:47:03 json_config -- json_config/common.sh@38 -- # kill -SIGINT 224400 00:06:13.050 [2024-05-13 02:47:03.652650] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:13.050 02:47:03 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:13.050 02:47:03 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:13.050 02:47:03 json_config -- json_config/common.sh@41 -- # kill -0 224400 00:06:13.050 02:47:03 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:13.620 02:47:04 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:13.620 02:47:04 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:13.620 02:47:04 json_config -- json_config/common.sh@41 -- # kill -0 224400 00:06:13.620 02:47:04 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:13.620 02:47:04 json_config -- json_config/common.sh@43 -- # break 00:06:13.620 02:47:04 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:13.620 02:47:04 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:13.620 SPDK target shutdown done 00:06:13.620 02:47:04 json_config -- 
json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:06:13.620 INFO: relaunching applications... 00:06:13.620 02:47:04 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:13.620 02:47:04 json_config -- json_config/common.sh@9 -- # local app=target 00:06:13.620 02:47:04 json_config -- json_config/common.sh@10 -- # shift 00:06:13.620 02:47:04 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:13.620 02:47:04 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:13.620 02:47:04 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:13.620 02:47:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:13.620 02:47:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:13.620 02:47:04 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=225709 00:06:13.620 02:47:04 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:13.620 02:47:04 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:13.621 Waiting for target to run... 00:06:13.621 02:47:04 json_config -- json_config/common.sh@25 -- # waitforlisten 225709 /var/tmp/spdk_tgt.sock 00:06:13.621 02:47:04 json_config -- common/autotest_common.sh@827 -- # '[' -z 225709 ']' 00:06:13.621 02:47:04 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:13.621 02:47:04 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:13.621 02:47:04 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:13.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:13.621 02:47:04 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:13.621 02:47:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.621 [2024-05-13 02:47:04.210208] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:06:13.621 [2024-05-13 02:47:04.210313] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid225709 ] 00:06:13.621 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.189 [2024-05-13 02:47:04.707193] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
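The relaunch traced above restarts the target from the spdk_tgt_config.json snapshot. That snapshot is the result of the RPC sequence issued earlier in the trace; replayed by hand against the same socket it looks roughly like this (rpc is shorthand for scripts/rpc.py in the SPDK tree, arguments as in the trace):

  rpc="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  $rpc bdev_malloc_create 8 512 --name MallocForNvmf0        # backing bdevs for the namespaces
  $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
  $rpc nvmf_create_transport -t tcp -u 8192 -c 0             # TCP transport, sizes as in the trace
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
  $rpc save_config > spdk_tgt_config.json                    # snapshot consumed by the --json relaunch

  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json &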
00:06:14.189 [2024-05-13 02:47:04.741278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.189 [2024-05-13 02:47:04.815610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.472 [2024-05-13 02:47:07.835793] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:17.472 [2024-05-13 02:47:07.867759] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:17.472 [2024-05-13 02:47:07.868250] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:18.040 02:47:08 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:18.040 02:47:08 json_config -- common/autotest_common.sh@860 -- # return 0 00:06:18.040 02:47:08 json_config -- json_config/common.sh@26 -- # echo '' 00:06:18.040 00:06:18.040 02:47:08 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:18.040 02:47:08 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:18.040 INFO: Checking if target configuration is the same... 00:06:18.040 02:47:08 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:18.040 02:47:08 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:18.040 02:47:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:18.040 + '[' 2 -ne 2 ']' 00:06:18.040 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:18.040 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:18.040 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:18.040 +++ basename /dev/fd/62 00:06:18.040 ++ mktemp /tmp/62.XXX 00:06:18.040 + tmp_file_1=/tmp/62.eAk 00:06:18.040 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:18.040 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:18.040 + tmp_file_2=/tmp/spdk_tgt_config.json.rOi 00:06:18.040 + ret=0 00:06:18.040 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:18.336 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:18.336 + diff -u /tmp/62.eAk /tmp/spdk_tgt_config.json.rOi 00:06:18.336 + echo 'INFO: JSON config files are the same' 00:06:18.336 INFO: JSON config files are the same 00:06:18.336 + rm /tmp/62.eAk /tmp/spdk_tgt_config.json.rOi 00:06:18.336 + exit 0 00:06:18.336 02:47:09 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:18.336 02:47:09 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:18.336 INFO: changing configuration and checking if this can be detected... 
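The "Checking if target configuration is the same" step above does not diff the files directly: both the live save_config output and the on-disk JSON are first normalized with config_filter.py -method sort, so only real content differences count. A sketch of that comparison (the mktemp names are illustrative; this run happened to get /tmp/62.eAk and /tmp/spdk_tgt_config.json.rOi):

  live=$(mktemp /tmp/62.XXX)
  ref=$(mktemp /tmp/spdk_tgt_config.json.XXX)
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | test/json_config/config_filter.py -method sort > "$live"
  test/json_config/config_filter.py -method sort < spdk_tgt_config.json > "$ref"
  diff -u "$live" "$ref" && echo 'INFO: JSON config files are the same'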
00:06:18.336 02:47:09 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:18.336 02:47:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:18.594 02:47:09 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:18.594 02:47:09 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:18.594 02:47:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:18.594 + '[' 2 -ne 2 ']' 00:06:18.594 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:18.594 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:18.594 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:18.594 +++ basename /dev/fd/62 00:06:18.594 ++ mktemp /tmp/62.XXX 00:06:18.594 + tmp_file_1=/tmp/62.Kum 00:06:18.594 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:18.594 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:18.594 + tmp_file_2=/tmp/spdk_tgt_config.json.VXv 00:06:18.594 + ret=0 00:06:18.594 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:19.161 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:19.161 + diff -u /tmp/62.Kum /tmp/spdk_tgt_config.json.VXv 00:06:19.161 + ret=1 00:06:19.161 + echo '=== Start of file: /tmp/62.Kum ===' 00:06:19.161 + cat /tmp/62.Kum 00:06:19.161 + echo '=== End of file: /tmp/62.Kum ===' 00:06:19.161 + echo '' 00:06:19.161 + echo '=== Start of file: /tmp/spdk_tgt_config.json.VXv ===' 00:06:19.161 + cat /tmp/spdk_tgt_config.json.VXv 00:06:19.161 + echo '=== End of file: /tmp/spdk_tgt_config.json.VXv ===' 00:06:19.161 + echo '' 00:06:19.161 + rm /tmp/62.Kum /tmp/spdk_tgt_config.json.VXv 00:06:19.161 + exit 1 00:06:19.161 02:47:09 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:19.161 INFO: configuration change detected. 
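The change-detection pass above then deletes the MallocBdevForConfigChangeCheck bdev created during setup and repeats the same sorted diff, this time expecting it to fail; roughly:

  scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | test/json_config/config_filter.py -method sort > "$live"
  diff -u "$live" "$ref" || echo 'INFO: configuration change detected.'   # a non-zero diff is the expected outcome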
00:06:19.161 02:47:09 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:19.161 02:47:09 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:19.161 02:47:09 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:19.161 02:47:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:19.161 02:47:09 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:19.161 02:47:09 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:19.161 02:47:09 json_config -- json_config/json_config.sh@317 -- # [[ -n 225709 ]] 00:06:19.161 02:47:09 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:19.161 02:47:09 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:19.161 02:47:09 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:19.161 02:47:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:19.161 02:47:09 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:19.161 02:47:09 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:19.161 02:47:09 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:19.161 02:47:09 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:19.161 02:47:09 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:19.162 02:47:09 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:19.162 02:47:09 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:19.162 02:47:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:19.162 02:47:09 json_config -- json_config/json_config.sh@323 -- # killprocess 225709 00:06:19.162 02:47:09 json_config -- common/autotest_common.sh@946 -- # '[' -z 225709 ']' 00:06:19.162 02:47:09 json_config -- common/autotest_common.sh@950 -- # kill -0 225709 00:06:19.162 02:47:09 json_config -- common/autotest_common.sh@951 -- # uname 00:06:19.162 02:47:09 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:19.162 02:47:09 json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 225709 00:06:19.162 02:47:09 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:19.162 02:47:09 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:19.162 02:47:09 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 225709' 00:06:19.162 killing process with pid 225709 00:06:19.162 02:47:09 json_config -- common/autotest_common.sh@965 -- # kill 225709 00:06:19.162 [2024-05-13 02:47:09.797878] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:19.162 02:47:09 json_config -- common/autotest_common.sh@970 -- # wait 225709 00:06:21.063 02:47:11 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:21.063 02:47:11 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:21.063 02:47:11 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:21.063 02:47:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:21.063 02:47:11 
json_config -- json_config/json_config.sh@328 -- # return 0 00:06:21.063 02:47:11 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:21.063 INFO: Success 00:06:21.063 00:06:21.063 real 0m16.747s 00:06:21.063 user 0m18.511s 00:06:21.063 sys 0m2.234s 00:06:21.063 02:47:11 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:21.063 02:47:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:21.063 ************************************ 00:06:21.063 END TEST json_config 00:06:21.063 ************************************ 00:06:21.063 02:47:11 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:21.063 02:47:11 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:21.063 02:47:11 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:21.063 02:47:11 -- common/autotest_common.sh@10 -- # set +x 00:06:21.063 ************************************ 00:06:21.063 START TEST json_config_extra_key 00:06:21.063 ************************************ 00:06:21.063 02:47:11 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:21.063 02:47:11 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:21.063 02:47:11 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:21.063 02:47:11 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:21.063 02:47:11 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:21.063 02:47:11 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:21.063 02:47:11 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:21.063 02:47:11 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:21.063 02:47:11 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:21.063 02:47:11 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:21.063 02:47:11 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:21.063 02:47:11 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:21.063 02:47:11 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:21.063 02:47:11 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:21.063 02:47:11 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:21.063 02:47:11 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:21.063 02:47:11 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:21.063 02:47:11 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:21.063 02:47:11 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:21.063 02:47:11 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:21.063 02:47:11 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:21.063 02:47:11 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:21.063 
02:47:11 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:21.063 02:47:11 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.063 02:47:11 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.063 02:47:11 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.063 02:47:11 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:21.063 02:47:11 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.063 02:47:11 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:21.063 02:47:11 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:21.063 02:47:11 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:21.064 02:47:11 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:21.064 02:47:11 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:21.064 02:47:11 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:21.064 02:47:11 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:21.064 02:47:11 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:21.064 02:47:11 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:21.064 02:47:11 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:21.064 02:47:11 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:21.064 02:47:11 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:21.064 02:47:11 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:21.064 02:47:11 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:21.064 02:47:11 
json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:21.064 02:47:11 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:21.064 02:47:11 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:21.064 02:47:11 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:21.064 02:47:11 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:21.064 02:47:11 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:21.064 INFO: launching applications... 00:06:21.064 02:47:11 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:21.064 02:47:11 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:21.064 02:47:11 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:21.064 02:47:11 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:21.064 02:47:11 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:21.064 02:47:11 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:21.064 02:47:11 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:21.064 02:47:11 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:21.064 02:47:11 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=226642 00:06:21.064 02:47:11 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:21.064 02:47:11 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:21.064 Waiting for target to run... 00:06:21.064 02:47:11 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 226642 /var/tmp/spdk_tgt.sock 00:06:21.064 02:47:11 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 226642 ']' 00:06:21.064 02:47:11 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:21.064 02:47:11 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:21.064 02:47:11 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:21.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:21.064 02:47:11 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:21.064 02:47:11 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:21.064 [2024-05-13 02:47:11.586011] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 
00:06:21.064 [2024-05-13 02:47:11.586106] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid226642 ] 00:06:21.064 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.324 [2024-05-13 02:47:11.893913] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:21.324 [2024-05-13 02:47:11.927248] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.324 [2024-05-13 02:47:11.990909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.892 02:47:12 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:21.892 02:47:12 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:06:21.892 02:47:12 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:21.892 00:06:21.892 02:47:12 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:21.892 INFO: shutting down applications... 00:06:21.892 02:47:12 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:21.892 02:47:12 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:21.892 02:47:12 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:21.892 02:47:12 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 226642 ]] 00:06:21.892 02:47:12 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 226642 00:06:21.892 02:47:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:21.893 02:47:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:21.893 02:47:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 226642 00:06:21.893 02:47:12 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:22.461 02:47:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:22.461 02:47:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:22.461 02:47:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 226642 00:06:22.461 02:47:13 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:22.461 02:47:13 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:22.461 02:47:13 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:22.461 02:47:13 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:22.461 SPDK target shutdown done 00:06:22.461 02:47:13 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:22.461 Success 00:06:22.461 00:06:22.461 real 0m1.575s 00:06:22.461 user 0m1.588s 00:06:22.461 sys 0m0.415s 00:06:22.461 02:47:13 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:22.461 02:47:13 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:22.461 ************************************ 00:06:22.461 END TEST json_config_extra_key 00:06:22.461 ************************************ 00:06:22.461 02:47:13 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:22.461 02:47:13 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:22.461 
02:47:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:22.461 02:47:13 -- common/autotest_common.sh@10 -- # set +x 00:06:22.461 ************************************ 00:06:22.461 START TEST alias_rpc 00:06:22.462 ************************************ 00:06:22.462 02:47:13 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:22.462 * Looking for test storage... 00:06:22.462 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:22.462 02:47:13 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:22.462 02:47:13 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=226942 00:06:22.462 02:47:13 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:22.462 02:47:13 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 226942 00:06:22.462 02:47:13 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 226942 ']' 00:06:22.462 02:47:13 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.462 02:47:13 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:22.462 02:47:13 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.462 02:47:13 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:22.462 02:47:13 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.462 [2024-05-13 02:47:13.215764] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:06:22.462 [2024-05-13 02:47:13.215863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid226942 ] 00:06:22.462 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.462 [2024-05-13 02:47:13.247452] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
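The json_config_extra_key run that just finished exercises the other startup path: the target is launched straight from a prebuilt JSON file instead of being configured over RPC, then torn down with SIGINT while polling for the pid to disappear. Roughly, with paths from the trace:

  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json test/json_config/extra_key.json &
  pid=$!
  # ...checks against /var/tmp/spdk_tgt.sock go here...
  kill -SIGINT "$pid"
  for ((i = 0; i < 30; i++)); do
      kill -0 "$pid" 2>/dev/null || break   # pid gone? then shutdown is done
      sleep 0.5
  done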
00:06:22.720 [2024-05-13 02:47:13.274879] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.720 [2024-05-13 02:47:13.358725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.979 02:47:13 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:22.979 02:47:13 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:22.979 02:47:13 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:23.238 02:47:13 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 226942 00:06:23.238 02:47:13 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 226942 ']' 00:06:23.238 02:47:13 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 226942 00:06:23.238 02:47:13 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:06:23.238 02:47:13 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:23.238 02:47:13 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 226942 00:06:23.238 02:47:13 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:23.238 02:47:13 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:23.239 02:47:13 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 226942' 00:06:23.239 killing process with pid 226942 00:06:23.239 02:47:13 alias_rpc -- common/autotest_common.sh@965 -- # kill 226942 00:06:23.239 02:47:13 alias_rpc -- common/autotest_common.sh@970 -- # wait 226942 00:06:23.807 00:06:23.807 real 0m1.190s 00:06:23.807 user 0m1.265s 00:06:23.807 sys 0m0.415s 00:06:23.807 02:47:14 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:23.807 02:47:14 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.807 ************************************ 00:06:23.807 END TEST alias_rpc 00:06:23.807 ************************************ 00:06:23.807 02:47:14 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:06:23.807 02:47:14 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:23.807 02:47:14 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:23.807 02:47:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:23.807 02:47:14 -- common/autotest_common.sh@10 -- # set +x 00:06:23.807 ************************************ 00:06:23.807 START TEST spdkcli_tcp 00:06:23.807 ************************************ 00:06:23.807 02:47:14 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:23.807 * Looking for test storage... 
00:06:23.807 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:23.807 02:47:14 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:23.807 02:47:14 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:23.807 02:47:14 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:23.807 02:47:14 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:23.807 02:47:14 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:23.807 02:47:14 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:23.807 02:47:14 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:23.807 02:47:14 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:23.807 02:47:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:23.807 02:47:14 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=227127 00:06:23.807 02:47:14 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:23.807 02:47:14 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 227127 00:06:23.807 02:47:14 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 227127 ']' 00:06:23.807 02:47:14 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.807 02:47:14 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:23.807 02:47:14 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.807 02:47:14 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:23.807 02:47:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:23.807 [2024-05-13 02:47:14.460734] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:06:23.807 [2024-05-13 02:47:14.460818] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid227127 ] 00:06:23.807 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.807 [2024-05-13 02:47:14.491645] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
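The spdkcli_tcp run starting above checks that the JSON-RPC server is reachable over TCP as well as over its UNIX socket: a socat process bridges TCP port 9998 to /var/tmp/spdk.sock, and rpc.py is pointed at 127.0.0.1:9998 (the long method list that follows is the rpc_get_methods reply). A sketch of that bridge, with the -r/-t values as used in the trace:

  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &   # forward a TCP connection to the RPC socket
  socat_pid=$!
  scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
  kill "$socat_pid"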
00:06:23.807 [2024-05-13 02:47:14.518417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:23.807 [2024-05-13 02:47:14.604448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.807 [2024-05-13 02:47:14.604452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.065 02:47:14 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:24.065 02:47:14 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:06:24.065 02:47:14 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=227148 00:06:24.065 02:47:14 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:24.065 02:47:14 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:24.324 [ 00:06:24.324 "bdev_malloc_delete", 00:06:24.324 "bdev_malloc_create", 00:06:24.324 "bdev_null_resize", 00:06:24.324 "bdev_null_delete", 00:06:24.324 "bdev_null_create", 00:06:24.324 "bdev_nvme_cuse_unregister", 00:06:24.324 "bdev_nvme_cuse_register", 00:06:24.324 "bdev_opal_new_user", 00:06:24.324 "bdev_opal_set_lock_state", 00:06:24.324 "bdev_opal_delete", 00:06:24.324 "bdev_opal_get_info", 00:06:24.324 "bdev_opal_create", 00:06:24.324 "bdev_nvme_opal_revert", 00:06:24.324 "bdev_nvme_opal_init", 00:06:24.324 "bdev_nvme_send_cmd", 00:06:24.324 "bdev_nvme_get_path_iostat", 00:06:24.324 "bdev_nvme_get_mdns_discovery_info", 00:06:24.324 "bdev_nvme_stop_mdns_discovery", 00:06:24.324 "bdev_nvme_start_mdns_discovery", 00:06:24.324 "bdev_nvme_set_multipath_policy", 00:06:24.324 "bdev_nvme_set_preferred_path", 00:06:24.324 "bdev_nvme_get_io_paths", 00:06:24.324 "bdev_nvme_remove_error_injection", 00:06:24.324 "bdev_nvme_add_error_injection", 00:06:24.324 "bdev_nvme_get_discovery_info", 00:06:24.324 "bdev_nvme_stop_discovery", 00:06:24.324 "bdev_nvme_start_discovery", 00:06:24.324 "bdev_nvme_get_controller_health_info", 00:06:24.324 "bdev_nvme_disable_controller", 00:06:24.324 "bdev_nvme_enable_controller", 00:06:24.324 "bdev_nvme_reset_controller", 00:06:24.324 "bdev_nvme_get_transport_statistics", 00:06:24.324 "bdev_nvme_apply_firmware", 00:06:24.324 "bdev_nvme_detach_controller", 00:06:24.324 "bdev_nvme_get_controllers", 00:06:24.324 "bdev_nvme_attach_controller", 00:06:24.324 "bdev_nvme_set_hotplug", 00:06:24.324 "bdev_nvme_set_options", 00:06:24.324 "bdev_passthru_delete", 00:06:24.324 "bdev_passthru_create", 00:06:24.324 "bdev_lvol_grow_lvstore", 00:06:24.324 "bdev_lvol_get_lvols", 00:06:24.324 "bdev_lvol_get_lvstores", 00:06:24.324 "bdev_lvol_delete", 00:06:24.324 "bdev_lvol_set_read_only", 00:06:24.324 "bdev_lvol_resize", 00:06:24.324 "bdev_lvol_decouple_parent", 00:06:24.324 "bdev_lvol_inflate", 00:06:24.324 "bdev_lvol_rename", 00:06:24.324 "bdev_lvol_clone_bdev", 00:06:24.324 "bdev_lvol_clone", 00:06:24.324 "bdev_lvol_snapshot", 00:06:24.324 "bdev_lvol_create", 00:06:24.324 "bdev_lvol_delete_lvstore", 00:06:24.324 "bdev_lvol_rename_lvstore", 00:06:24.324 "bdev_lvol_create_lvstore", 00:06:24.324 "bdev_raid_set_options", 00:06:24.324 "bdev_raid_remove_base_bdev", 00:06:24.324 "bdev_raid_add_base_bdev", 00:06:24.324 "bdev_raid_delete", 00:06:24.324 "bdev_raid_create", 00:06:24.324 "bdev_raid_get_bdevs", 00:06:24.324 "bdev_error_inject_error", 00:06:24.324 "bdev_error_delete", 00:06:24.324 "bdev_error_create", 00:06:24.324 "bdev_split_delete", 00:06:24.324 "bdev_split_create", 00:06:24.324 "bdev_delay_delete", 00:06:24.324 "bdev_delay_create", 
00:06:24.324 "bdev_delay_update_latency", 00:06:24.324 "bdev_zone_block_delete", 00:06:24.324 "bdev_zone_block_create", 00:06:24.324 "blobfs_create", 00:06:24.324 "blobfs_detect", 00:06:24.324 "blobfs_set_cache_size", 00:06:24.324 "bdev_aio_delete", 00:06:24.324 "bdev_aio_rescan", 00:06:24.324 "bdev_aio_create", 00:06:24.324 "bdev_ftl_set_property", 00:06:24.324 "bdev_ftl_get_properties", 00:06:24.324 "bdev_ftl_get_stats", 00:06:24.324 "bdev_ftl_unmap", 00:06:24.324 "bdev_ftl_unload", 00:06:24.324 "bdev_ftl_delete", 00:06:24.324 "bdev_ftl_load", 00:06:24.324 "bdev_ftl_create", 00:06:24.324 "bdev_virtio_attach_controller", 00:06:24.324 "bdev_virtio_scsi_get_devices", 00:06:24.324 "bdev_virtio_detach_controller", 00:06:24.324 "bdev_virtio_blk_set_hotplug", 00:06:24.324 "bdev_iscsi_delete", 00:06:24.324 "bdev_iscsi_create", 00:06:24.324 "bdev_iscsi_set_options", 00:06:24.324 "accel_error_inject_error", 00:06:24.324 "ioat_scan_accel_module", 00:06:24.324 "dsa_scan_accel_module", 00:06:24.324 "iaa_scan_accel_module", 00:06:24.324 "vfu_virtio_create_scsi_endpoint", 00:06:24.324 "vfu_virtio_scsi_remove_target", 00:06:24.324 "vfu_virtio_scsi_add_target", 00:06:24.324 "vfu_virtio_create_blk_endpoint", 00:06:24.324 "vfu_virtio_delete_endpoint", 00:06:24.324 "keyring_file_remove_key", 00:06:24.324 "keyring_file_add_key", 00:06:24.324 "iscsi_get_histogram", 00:06:24.324 "iscsi_enable_histogram", 00:06:24.324 "iscsi_set_options", 00:06:24.324 "iscsi_get_auth_groups", 00:06:24.324 "iscsi_auth_group_remove_secret", 00:06:24.324 "iscsi_auth_group_add_secret", 00:06:24.324 "iscsi_delete_auth_group", 00:06:24.324 "iscsi_create_auth_group", 00:06:24.324 "iscsi_set_discovery_auth", 00:06:24.324 "iscsi_get_options", 00:06:24.324 "iscsi_target_node_request_logout", 00:06:24.324 "iscsi_target_node_set_redirect", 00:06:24.324 "iscsi_target_node_set_auth", 00:06:24.324 "iscsi_target_node_add_lun", 00:06:24.324 "iscsi_get_stats", 00:06:24.324 "iscsi_get_connections", 00:06:24.324 "iscsi_portal_group_set_auth", 00:06:24.324 "iscsi_start_portal_group", 00:06:24.324 "iscsi_delete_portal_group", 00:06:24.324 "iscsi_create_portal_group", 00:06:24.324 "iscsi_get_portal_groups", 00:06:24.324 "iscsi_delete_target_node", 00:06:24.324 "iscsi_target_node_remove_pg_ig_maps", 00:06:24.324 "iscsi_target_node_add_pg_ig_maps", 00:06:24.324 "iscsi_create_target_node", 00:06:24.324 "iscsi_get_target_nodes", 00:06:24.324 "iscsi_delete_initiator_group", 00:06:24.324 "iscsi_initiator_group_remove_initiators", 00:06:24.324 "iscsi_initiator_group_add_initiators", 00:06:24.324 "iscsi_create_initiator_group", 00:06:24.324 "iscsi_get_initiator_groups", 00:06:24.324 "nvmf_set_crdt", 00:06:24.324 "nvmf_set_config", 00:06:24.324 "nvmf_set_max_subsystems", 00:06:24.324 "nvmf_subsystem_get_listeners", 00:06:24.324 "nvmf_subsystem_get_qpairs", 00:06:24.324 "nvmf_subsystem_get_controllers", 00:06:24.324 "nvmf_get_stats", 00:06:24.324 "nvmf_get_transports", 00:06:24.324 "nvmf_create_transport", 00:06:24.324 "nvmf_get_targets", 00:06:24.324 "nvmf_delete_target", 00:06:24.324 "nvmf_create_target", 00:06:24.324 "nvmf_subsystem_allow_any_host", 00:06:24.324 "nvmf_subsystem_remove_host", 00:06:24.324 "nvmf_subsystem_add_host", 00:06:24.324 "nvmf_ns_remove_host", 00:06:24.324 "nvmf_ns_add_host", 00:06:24.324 "nvmf_subsystem_remove_ns", 00:06:24.324 "nvmf_subsystem_add_ns", 00:06:24.324 "nvmf_subsystem_listener_set_ana_state", 00:06:24.324 "nvmf_discovery_get_referrals", 00:06:24.324 "nvmf_discovery_remove_referral", 00:06:24.324 
"nvmf_discovery_add_referral", 00:06:24.324 "nvmf_subsystem_remove_listener", 00:06:24.324 "nvmf_subsystem_add_listener", 00:06:24.324 "nvmf_delete_subsystem", 00:06:24.324 "nvmf_create_subsystem", 00:06:24.324 "nvmf_get_subsystems", 00:06:24.324 "env_dpdk_get_mem_stats", 00:06:24.324 "nbd_get_disks", 00:06:24.324 "nbd_stop_disk", 00:06:24.324 "nbd_start_disk", 00:06:24.324 "ublk_recover_disk", 00:06:24.324 "ublk_get_disks", 00:06:24.324 "ublk_stop_disk", 00:06:24.324 "ublk_start_disk", 00:06:24.324 "ublk_destroy_target", 00:06:24.324 "ublk_create_target", 00:06:24.324 "virtio_blk_create_transport", 00:06:24.324 "virtio_blk_get_transports", 00:06:24.324 "vhost_controller_set_coalescing", 00:06:24.324 "vhost_get_controllers", 00:06:24.324 "vhost_delete_controller", 00:06:24.324 "vhost_create_blk_controller", 00:06:24.324 "vhost_scsi_controller_remove_target", 00:06:24.324 "vhost_scsi_controller_add_target", 00:06:24.325 "vhost_start_scsi_controller", 00:06:24.325 "vhost_create_scsi_controller", 00:06:24.325 "thread_set_cpumask", 00:06:24.325 "framework_get_scheduler", 00:06:24.325 "framework_set_scheduler", 00:06:24.325 "framework_get_reactors", 00:06:24.325 "thread_get_io_channels", 00:06:24.325 "thread_get_pollers", 00:06:24.325 "thread_get_stats", 00:06:24.325 "framework_monitor_context_switch", 00:06:24.325 "spdk_kill_instance", 00:06:24.325 "log_enable_timestamps", 00:06:24.325 "log_get_flags", 00:06:24.325 "log_clear_flag", 00:06:24.325 "log_set_flag", 00:06:24.325 "log_get_level", 00:06:24.325 "log_set_level", 00:06:24.325 "log_get_print_level", 00:06:24.325 "log_set_print_level", 00:06:24.325 "framework_enable_cpumask_locks", 00:06:24.325 "framework_disable_cpumask_locks", 00:06:24.325 "framework_wait_init", 00:06:24.325 "framework_start_init", 00:06:24.325 "scsi_get_devices", 00:06:24.325 "bdev_get_histogram", 00:06:24.325 "bdev_enable_histogram", 00:06:24.325 "bdev_set_qos_limit", 00:06:24.325 "bdev_set_qd_sampling_period", 00:06:24.325 "bdev_get_bdevs", 00:06:24.325 "bdev_reset_iostat", 00:06:24.325 "bdev_get_iostat", 00:06:24.325 "bdev_examine", 00:06:24.325 "bdev_wait_for_examine", 00:06:24.325 "bdev_set_options", 00:06:24.325 "notify_get_notifications", 00:06:24.325 "notify_get_types", 00:06:24.325 "accel_get_stats", 00:06:24.325 "accel_set_options", 00:06:24.325 "accel_set_driver", 00:06:24.325 "accel_crypto_key_destroy", 00:06:24.325 "accel_crypto_keys_get", 00:06:24.325 "accel_crypto_key_create", 00:06:24.325 "accel_assign_opc", 00:06:24.325 "accel_get_module_info", 00:06:24.325 "accel_get_opc_assignments", 00:06:24.325 "vmd_rescan", 00:06:24.325 "vmd_remove_device", 00:06:24.325 "vmd_enable", 00:06:24.325 "sock_get_default_impl", 00:06:24.325 "sock_set_default_impl", 00:06:24.325 "sock_impl_set_options", 00:06:24.325 "sock_impl_get_options", 00:06:24.325 "iobuf_get_stats", 00:06:24.325 "iobuf_set_options", 00:06:24.325 "keyring_get_keys", 00:06:24.325 "framework_get_pci_devices", 00:06:24.325 "framework_get_config", 00:06:24.325 "framework_get_subsystems", 00:06:24.325 "vfu_tgt_set_base_path", 00:06:24.325 "trace_get_info", 00:06:24.325 "trace_get_tpoint_group_mask", 00:06:24.325 "trace_disable_tpoint_group", 00:06:24.325 "trace_enable_tpoint_group", 00:06:24.325 "trace_clear_tpoint_mask", 00:06:24.325 "trace_set_tpoint_mask", 00:06:24.325 "spdk_get_version", 00:06:24.325 "rpc_get_methods" 00:06:24.325 ] 00:06:24.325 02:47:15 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:24.325 02:47:15 spdkcli_tcp -- common/autotest_common.sh@726 -- # 
xtrace_disable 00:06:24.325 02:47:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:24.325 02:47:15 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:24.325 02:47:15 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 227127 00:06:24.325 02:47:15 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 227127 ']' 00:06:24.325 02:47:15 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 227127 00:06:24.325 02:47:15 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:06:24.325 02:47:15 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:24.325 02:47:15 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 227127 00:06:24.583 02:47:15 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:24.583 02:47:15 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:24.583 02:47:15 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 227127' 00:06:24.583 killing process with pid 227127 00:06:24.583 02:47:15 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 227127 00:06:24.583 02:47:15 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 227127 00:06:24.842 00:06:24.842 real 0m1.188s 00:06:24.842 user 0m2.101s 00:06:24.842 sys 0m0.439s 00:06:24.842 02:47:15 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:24.842 02:47:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:24.842 ************************************ 00:06:24.842 END TEST spdkcli_tcp 00:06:24.842 ************************************ 00:06:24.842 02:47:15 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:24.842 02:47:15 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:24.842 02:47:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:24.842 02:47:15 -- common/autotest_common.sh@10 -- # set +x 00:06:24.842 ************************************ 00:06:24.842 START TEST dpdk_mem_utility 00:06:24.842 ************************************ 00:06:24.842 02:47:15 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:25.101 * Looking for test storage... 00:06:25.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:25.101 02:47:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:25.101 02:47:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=227334 00:06:25.101 02:47:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:25.101 02:47:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 227334 00:06:25.101 02:47:15 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 227334 ']' 00:06:25.101 02:47:15 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.101 02:47:15 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:25.101 02:47:15 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:25.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.101 02:47:15 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:25.101 02:47:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:25.101 [2024-05-13 02:47:15.700489] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:06:25.101 [2024-05-13 02:47:15.700570] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid227334 ] 00:06:25.101 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.102 [2024-05-13 02:47:15.732565] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:25.102 [2024-05-13 02:47:15.761439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.102 [2024-05-13 02:47:15.856608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.361 02:47:16 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:25.361 02:47:16 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:06:25.361 02:47:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:25.361 02:47:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:25.361 02:47:16 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.361 02:47:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:25.361 { 00:06:25.361 "filename": "/tmp/spdk_mem_dump.txt" 00:06:25.361 } 00:06:25.361 02:47:16 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.361 02:47:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:25.620 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:25.620 1 heaps totaling size 814.000000 MiB 00:06:25.620 size: 814.000000 MiB heap id: 0 00:06:25.620 end heaps---------- 00:06:25.620 8 mempools totaling size 598.116089 MiB 00:06:25.620 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:25.620 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:25.620 size: 84.521057 MiB name: bdev_io_227334 00:06:25.620 size: 51.011292 MiB name: evtpool_227334 00:06:25.620 size: 50.003479 MiB name: msgpool_227334 00:06:25.620 size: 21.763794 MiB name: PDU_Pool 00:06:25.620 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:25.620 size: 0.026123 MiB name: Session_Pool 00:06:25.620 end mempools------- 00:06:25.620 6 memzones totaling size 4.142822 MiB 00:06:25.620 size: 1.000366 MiB name: RG_ring_0_227334 00:06:25.620 size: 1.000366 MiB name: RG_ring_1_227334 00:06:25.620 size: 1.000366 MiB name: RG_ring_4_227334 00:06:25.620 size: 1.000366 MiB name: RG_ring_5_227334 00:06:25.620 size: 0.125366 MiB name: RG_ring_2_227334 00:06:25.620 size: 0.015991 MiB name: RG_ring_3_227334 00:06:25.620 end memzones------- 00:06:25.620 02:47:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:25.620 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:25.620 list of free elements. 
size: 12.519348 MiB 00:06:25.620 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:25.620 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:25.620 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:25.620 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:25.620 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:25.620 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:25.620 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:25.620 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:25.620 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:25.620 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:25.620 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:25.620 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:25.620 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:25.620 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:25.620 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:25.620 list of standard malloc elements. size: 199.218079 MiB 00:06:25.620 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:25.620 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:25.620 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:25.620 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:25.620 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:25.620 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:25.620 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:25.620 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:25.620 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:25.620 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:25.620 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:25.620 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:25.620 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:25.620 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:25.620 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:25.620 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:25.620 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:25.620 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:25.620 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:25.620 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:25.620 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:25.620 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:25.620 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:25.620 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:25.620 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:25.620 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:25.620 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:25.620 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:25.620 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:25.620 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:25.620 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:25.621 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:25.621 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:06:25.621 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:25.621 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:25.621 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:25.621 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:25.621 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:25.621 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:25.621 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:25.621 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:25.621 list of memzone associated elements. size: 602.262573 MiB 00:06:25.621 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:25.621 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:25.621 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:25.621 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:25.621 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:25.621 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_227334_0 00:06:25.621 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:25.621 associated memzone info: size: 48.002930 MiB name: MP_evtpool_227334_0 00:06:25.621 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:25.621 associated memzone info: size: 48.002930 MiB name: MP_msgpool_227334_0 00:06:25.621 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:25.621 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:25.621 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:25.621 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:25.621 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:25.621 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_227334 00:06:25.621 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:25.621 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_227334 00:06:25.621 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:25.621 associated memzone info: size: 1.007996 MiB name: MP_evtpool_227334 00:06:25.621 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:25.621 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:25.621 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:25.621 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:25.621 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:25.621 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:25.621 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:25.621 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:25.621 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:25.621 associated memzone info: size: 1.000366 MiB name: RG_ring_0_227334 00:06:25.621 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:25.621 associated memzone info: size: 1.000366 MiB name: RG_ring_1_227334 00:06:25.621 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:25.621 associated memzone info: size: 1.000366 MiB name: RG_ring_4_227334 00:06:25.621 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:25.621 associated memzone info: size: 1.000366 MiB name: RG_ring_5_227334 00:06:25.621 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:25.621 associated memzone 
info: size: 0.500366 MiB name: RG_MP_bdev_io_227334 00:06:25.621 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:25.621 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:25.621 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:25.621 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:25.621 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:25.621 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:25.621 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:25.621 associated memzone info: size: 0.125366 MiB name: RG_ring_2_227334 00:06:25.621 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:25.621 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:25.621 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:25.621 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:25.621 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:25.621 associated memzone info: size: 0.015991 MiB name: RG_ring_3_227334 00:06:25.621 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:25.621 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:25.621 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:25.621 associated memzone info: size: 0.000183 MiB name: MP_msgpool_227334 00:06:25.621 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:25.621 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_227334 00:06:25.621 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:25.621 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:25.621 02:47:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:25.621 02:47:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 227334 00:06:25.621 02:47:16 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 227334 ']' 00:06:25.621 02:47:16 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 227334 00:06:25.621 02:47:16 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:06:25.621 02:47:16 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:25.621 02:47:16 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 227334 00:06:25.621 02:47:16 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:25.621 02:47:16 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:25.621 02:47:16 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 227334' 00:06:25.621 killing process with pid 227334 00:06:25.621 02:47:16 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 227334 00:06:25.621 02:47:16 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 227334 00:06:25.880 00:06:25.880 real 0m1.050s 00:06:25.880 user 0m1.006s 00:06:25.880 sys 0m0.408s 00:06:25.880 02:47:16 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:25.880 02:47:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:25.880 ************************************ 00:06:25.880 END TEST dpdk_mem_utility 00:06:25.880 ************************************ 00:06:25.880 02:47:16 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 
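The dpdk_mem_utility run above drives scripts/dpdk_mem_info.py against a dump produced by the env_dpdk_get_mem_stats RPC. A minimal sketch of reproducing that outside autotest, assuming a plain SPDK checkout and the default RPC socket and dump path shown in the log:

# start a target in the background; its RPC server listens on /var/tmp/spdk.sock by default
./build/bin/spdk_tgt &
tgt_pid=$!
sleep 1                                      # crude stand-in for the waitforlisten helper the test uses
./scripts/rpc.py framework_wait_init         # block until application startup has completed

# ask the target to write its DPDK memory snapshot (the log above shows /tmp/spdk_mem_dump.txt)
./scripts/rpc.py env_dpdk_get_mem_stats

# summarize the snapshot: overall heap/mempool totals, then per-element detail for malloc heap 0
./scripts/dpdk_mem_info.py
./scripts/dpdk_mem_info.py -m 0

kill $tgt_pid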
00:06:25.880 02:47:16 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:25.880 02:47:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:25.880 02:47:16 -- common/autotest_common.sh@10 -- # set +x 00:06:26.138 ************************************ 00:06:26.138 START TEST event 00:06:26.138 ************************************ 00:06:26.138 02:47:16 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:26.138 * Looking for test storage... 00:06:26.138 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:26.138 02:47:16 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:26.138 02:47:16 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:26.138 02:47:16 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:26.138 02:47:16 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:26.138 02:47:16 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:26.138 02:47:16 event -- common/autotest_common.sh@10 -- # set +x 00:06:26.138 ************************************ 00:06:26.138 START TEST event_perf 00:06:26.138 ************************************ 00:06:26.138 02:47:16 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:26.138 Running I/O for 1 seconds...[2024-05-13 02:47:16.800487] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:06:26.138 [2024-05-13 02:47:16.800548] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid227524 ] 00:06:26.138 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.138 [2024-05-13 02:47:16.836314] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:26.138 [2024-05-13 02:47:16.866708] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:26.397 [2024-05-13 02:47:16.963206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.397 [2024-05-13 02:47:16.963258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:26.397 [2024-05-13 02:47:16.963376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:26.397 [2024-05-13 02:47:16.963378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.335 Running I/O for 1 seconds... 00:06:27.335 lcore 0: 231631 00:06:27.335 lcore 1: 231629 00:06:27.335 lcore 2: 231630 00:06:27.335 lcore 3: 231631 00:06:27.335 done. 
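The per-lcore counters printed above come straight from the event_perf benchmark binary; it can be run by hand with the same arguments the test used (a sketch, assuming the SPDK workspace path from this job):

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# 4 reactors (-m 0xF), run the event loop for 1 second (-t 1)
./test/event/event_perf/event_perf -m 0xF -t 1
# each "lcore N: <count>" line is the number of events that reactor processed during the interval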
00:06:27.335 00:06:27.335 real 0m1.257s 00:06:27.335 user 0m4.166s 00:06:27.335 sys 0m0.087s 00:06:27.335 02:47:18 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:27.335 02:47:18 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:27.335 ************************************ 00:06:27.335 END TEST event_perf 00:06:27.335 ************************************ 00:06:27.335 02:47:18 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:27.335 02:47:18 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:27.335 02:47:18 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:27.335 02:47:18 event -- common/autotest_common.sh@10 -- # set +x 00:06:27.335 ************************************ 00:06:27.335 START TEST event_reactor 00:06:27.335 ************************************ 00:06:27.335 02:47:18 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:27.335 [2024-05-13 02:47:18.109298] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:06:27.335 [2024-05-13 02:47:18.109356] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid227687 ] 00:06:27.335 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.594 [2024-05-13 02:47:18.141469] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:27.594 [2024-05-13 02:47:18.169557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.594 [2024-05-13 02:47:18.263120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.977 test_start 00:06:28.977 oneshot 00:06:28.977 tick 100 00:06:28.977 tick 100 00:06:28.977 tick 250 00:06:28.977 tick 100 00:06:28.977 tick 100 00:06:28.977 tick 100 00:06:28.977 tick 250 00:06:28.977 tick 500 00:06:28.977 tick 100 00:06:28.977 tick 100 00:06:28.977 tick 250 00:06:28.977 tick 100 00:06:28.977 tick 100 00:06:28.977 test_end 00:06:28.977 00:06:28.977 real 0m1.247s 00:06:28.977 user 0m1.155s 00:06:28.977 sys 0m0.088s 00:06:28.977 02:47:19 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:28.977 02:47:19 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:28.977 ************************************ 00:06:28.977 END TEST event_reactor 00:06:28.977 ************************************ 00:06:28.977 02:47:19 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:28.977 02:47:19 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:28.977 02:47:19 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:28.977 02:47:19 event -- common/autotest_common.sh@10 -- # set +x 00:06:28.977 ************************************ 00:06:28.977 START TEST event_reactor_perf 00:06:28.977 ************************************ 00:06:28.977 02:47:19 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:28.977 [2024-05-13 02:47:19.414842] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 
24.07.0-rc0 initialization... 00:06:28.977 [2024-05-13 02:47:19.414916] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid227843 ] 00:06:28.977 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.977 [2024-05-13 02:47:19.448359] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:28.977 [2024-05-13 02:47:19.482157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.977 [2024-05-13 02:47:19.573083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.924 test_start 00:06:29.924 test_end 00:06:29.924 Performance: 353974 events per second 00:06:29.924 00:06:29.924 real 0m1.252s 00:06:29.924 user 0m1.155s 00:06:29.924 sys 0m0.092s 00:06:29.924 02:47:20 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:29.924 02:47:20 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:29.924 ************************************ 00:06:29.924 END TEST event_reactor_perf 00:06:29.924 ************************************ 00:06:29.924 02:47:20 event -- event/event.sh@49 -- # uname -s 00:06:29.924 02:47:20 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:29.924 02:47:20 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:29.924 02:47:20 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:29.924 02:47:20 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:29.924 02:47:20 event -- common/autotest_common.sh@10 -- # set +x 00:06:29.924 ************************************ 00:06:29.924 START TEST event_scheduler 00:06:29.924 ************************************ 00:06:29.924 02:47:20 event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:30.184 * Looking for test storage... 00:06:30.184 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:30.184 02:47:20 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:30.184 02:47:20 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=228027 00:06:30.184 02:47:20 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:30.184 02:47:20 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:30.184 02:47:20 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 228027 00:06:30.184 02:47:20 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 228027 ']' 00:06:30.184 02:47:20 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.184 02:47:20 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:30.184 02:47:20 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:30.184 02:47:20 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:30.184 02:47:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:30.184 [2024-05-13 02:47:20.799406] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:06:30.184 [2024-05-13 02:47:20.799493] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid228027 ] 00:06:30.184 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.184 [2024-05-13 02:47:20.831213] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:30.184 [2024-05-13 02:47:20.858393] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:30.184 [2024-05-13 02:47:20.945255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.184 [2024-05-13 02:47:20.945311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.184 [2024-05-13 02:47:20.945377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:30.184 [2024-05-13 02:47:20.945379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.467 02:47:21 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:30.467 02:47:21 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:06:30.467 02:47:21 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:30.467 02:47:21 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.467 02:47:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:30.467 POWER: Env isn't set yet! 00:06:30.467 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:30.467 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:06:30.467 POWER: Cannot get available frequencies of lcore 0 00:06:30.467 POWER: Attempting to initialise PSTAT power management... 00:06:30.467 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:30.467 POWER: Initialized successfully for lcore 0 power management 00:06:30.467 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:30.467 POWER: Initialized successfully for lcore 1 power management 00:06:30.467 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:30.467 POWER: Initialized successfully for lcore 2 power management 00:06:30.467 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:30.467 POWER: Initialized successfully for lcore 3 power management 00:06:30.467 02:47:21 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.467 02:47:21 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:30.467 02:47:21 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.467 02:47:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:30.467 [2024-05-13 02:47:21.154562] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
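The governor and scheduler messages above result from selecting the dynamic scheduler over RPC before framework initialization. A minimal sketch of the same sequence using spdk_tgt instead of the test-only scheduler app (an assumption; only the RPC names below appear in the log):

# start paused so the scheduler can be chosen before the framework initializes
./build/bin/spdk_tgt --wait-for-rpc &
sleep 1                                        # give the RPC socket time to appear
./scripts/rpc.py framework_set_scheduler dynamic
./scripts/rpc.py framework_start_init
./scripts/rpc.py framework_get_scheduler       # should now report the dynamic scheduler
./scripts/rpc.py framework_get_reactors        # shows which lcore each SPDK thread landed on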
00:06:30.467 02:47:21 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.467 02:47:21 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:30.467 02:47:21 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:30.467 02:47:21 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:30.467 02:47:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:30.467 ************************************ 00:06:30.467 START TEST scheduler_create_thread 00:06:30.467 ************************************ 00:06:30.467 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:06:30.467 02:47:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:30.467 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.467 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.467 2 00:06:30.467 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.467 02:47:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:30.467 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.467 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.467 3 00:06:30.467 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.467 02:47:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:30.467 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.467 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.467 4 00:06:30.467 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.467 02:47:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:30.467 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.467 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.467 5 00:06:30.467 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.467 02:47:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:30.467 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.467 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.467 6 00:06:30.467 02:47:21 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.467 02:47:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:30.467 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.467 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.467 7 00:06:30.467 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.467 02:47:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:30.467 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.467 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.467 8 00:06:30.467 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.467 02:47:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:30.467 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.467 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.727 9 00:06:30.727 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.727 02:47:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:30.727 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.727 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.727 10 00:06:30.727 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.727 02:47:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:30.727 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.727 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.727 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.727 02:47:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:30.727 02:47:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:30.727 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.727 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.727 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.727 02:47:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:30.727 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.727 02:47:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:32.131 02:47:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.131 02:47:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:32.131 02:47:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:32.131 02:47:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.131 02:47:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.071 02:47:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.071 00:06:33.071 real 0m2.619s 00:06:33.071 user 0m0.010s 00:06:33.071 sys 0m0.005s 00:06:33.071 02:47:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:33.071 02:47:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.071 ************************************ 00:06:33.071 END TEST scheduler_create_thread 00:06:33.071 ************************************ 00:06:33.071 02:47:23 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:33.071 02:47:23 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 228027 00:06:33.071 02:47:23 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 228027 ']' 00:06:33.071 02:47:23 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 228027 00:06:33.071 02:47:23 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:06:33.071 02:47:23 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:33.071 02:47:23 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 228027 00:06:33.071 02:47:23 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:33.071 02:47:23 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:33.071 02:47:23 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 228027' 00:06:33.071 killing process with pid 228027 00:06:33.071 02:47:23 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 228027 00:06:33.071 02:47:23 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 228027 00:06:33.639 [2024-05-13 02:47:24.289765] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
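The scheduler_thread_* calls traced in scheduler_create_thread are not core RPCs; they come from the test's rpc.py plugin. A rough sketch of issuing them directly, assuming the plugin module ships alongside scheduler.sh and is reachable on PYTHONPATH (both assumptions):

export PYTHONPATH=./test/event/scheduler:$PYTHONPATH
# create an always-active thread pinned to core 0 and an unpinned idle one
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
# the create calls return thread ids (11 and 12 in the log); adjust one for load, then delete the other
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12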
00:06:33.639 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:06:33.639 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:33.639 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:06:33.639 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:33.639 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:06:33.639 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:33.640 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:06:33.640 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:33.898 00:06:33.898 real 0m3.809s 00:06:33.898 user 0m5.816s 00:06:33.898 sys 0m0.339s 00:06:33.898 02:47:24 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:33.898 02:47:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:33.898 ************************************ 00:06:33.898 END TEST event_scheduler 00:06:33.898 ************************************ 00:06:33.898 02:47:24 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:33.899 02:47:24 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:33.899 02:47:24 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:33.899 02:47:24 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:33.899 02:47:24 event -- common/autotest_common.sh@10 -- # set +x 00:06:33.899 ************************************ 00:06:33.899 START TEST app_repeat 00:06:33.899 ************************************ 00:06:33.899 02:47:24 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:06:33.899 02:47:24 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.899 02:47:24 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.899 02:47:24 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:33.899 02:47:24 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:33.899 02:47:24 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:33.899 02:47:24 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:33.899 02:47:24 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:33.899 02:47:24 event.app_repeat -- event/event.sh@19 -- # repeat_pid=228592 00:06:33.899 02:47:24 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:33.899 02:47:24 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:33.899 02:47:24 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 228592' 00:06:33.899 Process app_repeat pid: 228592 00:06:33.899 02:47:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:33.899 02:47:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:33.899 spdk_app_start Round 0 00:06:33.899 02:47:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 228592 /var/tmp/spdk-nbd.sock 00:06:33.899 02:47:24 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 228592 ']' 00:06:33.899 02:47:24 event.app_repeat -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:33.899 02:47:24 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:33.899 02:47:24 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:33.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:33.899 02:47:24 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:33.899 02:47:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:33.899 [2024-05-13 02:47:24.602854] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:06:33.899 [2024-05-13 02:47:24.602921] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid228592 ] 00:06:33.899 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.899 [2024-05-13 02:47:24.635097] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:33.899 [2024-05-13 02:47:24.666685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:34.157 [2024-05-13 02:47:24.760243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.157 [2024-05-13 02:47:24.760248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.157 02:47:24 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:34.157 02:47:24 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:34.157 02:47:24 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:34.415 Malloc0 00:06:34.415 02:47:25 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:34.673 Malloc1 00:06:34.673 02:47:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:34.673 02:47:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.673 02:47:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:34.673 02:47:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:34.673 02:47:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.673 02:47:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:34.673 02:47:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:34.673 02:47:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.673 02:47:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:34.673 02:47:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:34.673 02:47:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.673 02:47:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:34.673 02:47:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:34.673 02:47:25 event.app_repeat 
-- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:34.673 02:47:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:34.673 02:47:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:34.931 /dev/nbd0 00:06:34.931 02:47:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:34.931 02:47:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:34.931 02:47:25 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:34.931 02:47:25 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:34.931 02:47:25 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:34.931 02:47:25 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:34.931 02:47:25 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:34.931 02:47:25 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:34.931 02:47:25 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:34.931 02:47:25 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:34.931 02:47:25 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:34.931 1+0 records in 00:06:34.931 1+0 records out 00:06:34.931 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000182612 s, 22.4 MB/s 00:06:34.931 02:47:25 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:34.931 02:47:25 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:34.931 02:47:25 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:34.931 02:47:25 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:34.931 02:47:25 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:34.931 02:47:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:34.931 02:47:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:34.931 02:47:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:35.190 /dev/nbd1 00:06:35.190 02:47:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:35.190 02:47:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:35.190 02:47:25 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:35.190 02:47:25 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:35.190 02:47:25 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:35.190 02:47:25 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:35.190 02:47:25 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:35.190 02:47:25 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:35.190 02:47:25 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:35.190 02:47:25 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:35.190 02:47:25 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 
of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:35.190 1+0 records in 00:06:35.190 1+0 records out 00:06:35.190 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191179 s, 21.4 MB/s 00:06:35.190 02:47:25 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:35.190 02:47:25 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:35.190 02:47:25 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:35.190 02:47:25 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:35.190 02:47:25 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:35.190 02:47:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:35.190 02:47:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:35.190 02:47:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:35.190 02:47:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.190 02:47:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:35.449 02:47:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:35.449 { 00:06:35.449 "nbd_device": "/dev/nbd0", 00:06:35.449 "bdev_name": "Malloc0" 00:06:35.449 }, 00:06:35.449 { 00:06:35.449 "nbd_device": "/dev/nbd1", 00:06:35.449 "bdev_name": "Malloc1" 00:06:35.449 } 00:06:35.449 ]' 00:06:35.449 02:47:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:35.449 { 00:06:35.449 "nbd_device": "/dev/nbd0", 00:06:35.449 "bdev_name": "Malloc0" 00:06:35.449 }, 00:06:35.449 { 00:06:35.449 "nbd_device": "/dev/nbd1", 00:06:35.449 "bdev_name": "Malloc1" 00:06:35.449 } 00:06:35.449 ]' 00:06:35.449 02:47:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:35.449 02:47:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:35.449 /dev/nbd1' 00:06:35.449 02:47:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:35.449 /dev/nbd1' 00:06:35.449 02:47:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:35.449 02:47:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:35.449 02:47:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:35.449 02:47:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:35.449 02:47:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:35.449 02:47:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:35.449 02:47:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.449 02:47:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:35.449 02:47:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:35.449 02:47:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:35.449 02:47:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:35.449 02:47:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:35.449 256+0 
records in 00:06:35.449 256+0 records out 00:06:35.449 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.005053 s, 208 MB/s 00:06:35.449 02:47:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:35.449 02:47:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:35.707 256+0 records in 00:06:35.707 256+0 records out 00:06:35.707 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0235967 s, 44.4 MB/s 00:06:35.707 02:47:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:35.707 02:47:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:35.707 256+0 records in 00:06:35.707 256+0 records out 00:06:35.707 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0253638 s, 41.3 MB/s 00:06:35.707 02:47:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:35.707 02:47:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.707 02:47:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:35.707 02:47:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:35.707 02:47:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:35.707 02:47:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:35.707 02:47:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:35.707 02:47:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:35.707 02:47:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:35.707 02:47:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:35.707 02:47:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:35.707 02:47:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:35.707 02:47:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:35.707 02:47:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.707 02:47:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.707 02:47:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:35.707 02:47:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:35.707 02:47:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.707 02:47:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:35.966 02:47:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:35.966 02:47:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:35.966 02:47:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:35.966 02:47:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.966 02:47:26 event.app_repeat -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.966 02:47:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:35.966 02:47:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:35.966 02:47:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.966 02:47:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.966 02:47:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:36.234 02:47:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:36.234 02:47:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:36.234 02:47:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:36.234 02:47:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:36.234 02:47:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:36.234 02:47:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:36.234 02:47:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:36.234 02:47:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:36.234 02:47:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:36.234 02:47:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.234 02:47:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:36.492 02:47:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:36.492 02:47:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:36.492 02:47:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:36.492 02:47:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:36.492 02:47:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:36.492 02:47:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:36.492 02:47:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:36.492 02:47:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:36.492 02:47:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:36.492 02:47:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:36.492 02:47:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:36.492 02:47:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:36.492 02:47:27 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:36.752 02:47:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:37.011 [2024-05-13 02:47:27.626251] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:37.011 [2024-05-13 02:47:27.716804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.011 [2024-05-13 02:47:27.716804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.011 [2024-05-13 02:47:27.775097] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:37.011 [2024-05-13 02:47:27.775170] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
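The dd/cmp sequence traced above is the heart of nbd_dd_data_verify: write a random 1 MiB file, push it onto each exported /dev/nbd device with O_DIRECT, read it back with cmp, then detach the devices over the same RPC socket. A rough standalone sketch of that cycle follows (paths shortened; it assumes the SPDK target is already exporting the bdevs as /dev/nbd0 and /dev/nbd1 through /var/tmp/spdk-nbd.sock and that rpc.py is on PATH):

  # sketch only - reassembles the commands visible in the trace, not the helper itself
  tmp=$(mktemp)
  dd if=/dev/urandom of="$tmp" bs=4096 count=256              # 1 MiB of random data
  for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct     # write through the nbd device
    cmp -b -n 1M "$tmp" "$nbd"                                # byte-for-byte read-back check
  done
  rm "$tmp"
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0    # detach both devices
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1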
00:06:40.317 02:47:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:40.317 02:47:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:40.317 spdk_app_start Round 1 00:06:40.317 02:47:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 228592 /var/tmp/spdk-nbd.sock 00:06:40.317 02:47:30 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 228592 ']' 00:06:40.317 02:47:30 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:40.317 02:47:30 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:40.317 02:47:30 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:40.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:40.317 02:47:30 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:40.317 02:47:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:40.317 02:47:30 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:40.317 02:47:30 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:40.317 02:47:30 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:40.317 Malloc0 00:06:40.317 02:47:30 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:40.575 Malloc1 00:06:40.575 02:47:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:40.575 02:47:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.575 02:47:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:40.575 02:47:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:40.575 02:47:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.575 02:47:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:40.575 02:47:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:40.575 02:47:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.575 02:47:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:40.575 02:47:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:40.575 02:47:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.576 02:47:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:40.576 02:47:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:40.576 02:47:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:40.576 02:47:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:40.576 02:47:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:40.833 /dev/nbd0 00:06:40.833 02:47:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:40.833 02:47:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:06:40.833 02:47:31 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:40.833 02:47:31 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:40.833 02:47:31 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:40.833 02:47:31 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:40.833 02:47:31 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:40.833 02:47:31 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:40.833 02:47:31 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:40.833 02:47:31 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:40.833 02:47:31 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:40.833 1+0 records in 00:06:40.833 1+0 records out 00:06:40.833 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000182361 s, 22.5 MB/s 00:06:40.833 02:47:31 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:40.833 02:47:31 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:40.833 02:47:31 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:40.833 02:47:31 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:40.833 02:47:31 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:40.833 02:47:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:40.833 02:47:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:40.833 02:47:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:41.091 /dev/nbd1 00:06:41.091 02:47:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:41.091 02:47:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:41.091 02:47:31 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:41.091 02:47:31 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:41.091 02:47:31 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:41.091 02:47:31 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:41.091 02:47:31 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:41.091 02:47:31 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:41.091 02:47:31 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:41.091 02:47:31 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:41.091 02:47:31 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:41.091 1+0 records in 00:06:41.091 1+0 records out 00:06:41.091 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00019354 s, 21.2 MB/s 00:06:41.091 02:47:31 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:41.091 02:47:31 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:41.091 02:47:31 event.app_repeat -- 
common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:41.091 02:47:31 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:41.091 02:47:31 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:41.091 02:47:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.091 02:47:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.091 02:47:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:41.091 02:47:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.091 02:47:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:41.349 02:47:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:41.349 { 00:06:41.349 "nbd_device": "/dev/nbd0", 00:06:41.349 "bdev_name": "Malloc0" 00:06:41.349 }, 00:06:41.349 { 00:06:41.349 "nbd_device": "/dev/nbd1", 00:06:41.349 "bdev_name": "Malloc1" 00:06:41.349 } 00:06:41.349 ]' 00:06:41.349 02:47:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:41.349 { 00:06:41.349 "nbd_device": "/dev/nbd0", 00:06:41.349 "bdev_name": "Malloc0" 00:06:41.349 }, 00:06:41.349 { 00:06:41.349 "nbd_device": "/dev/nbd1", 00:06:41.349 "bdev_name": "Malloc1" 00:06:41.349 } 00:06:41.349 ]' 00:06:41.349 02:47:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:41.349 02:47:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:41.349 /dev/nbd1' 00:06:41.349 02:47:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:41.349 /dev/nbd1' 00:06:41.349 02:47:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:41.349 02:47:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:41.349 02:47:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:41.349 02:47:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:41.349 02:47:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:41.349 02:47:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:41.349 02:47:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.349 02:47:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:41.349 02:47:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:41.349 02:47:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:41.349 02:47:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:41.349 02:47:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:41.349 256+0 records in 00:06:41.349 256+0 records out 00:06:41.349 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0050485 s, 208 MB/s 00:06:41.350 02:47:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:41.350 02:47:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:41.350 256+0 records in 00:06:41.350 256+0 records out 00:06:41.350 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0235665 s, 44.5 MB/s 00:06:41.350 02:47:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:41.350 02:47:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:41.350 256+0 records in 00:06:41.350 256+0 records out 00:06:41.350 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0253816 s, 41.3 MB/s 00:06:41.350 02:47:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:41.350 02:47:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.350 02:47:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:41.350 02:47:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:41.350 02:47:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:41.350 02:47:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:41.350 02:47:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:41.350 02:47:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:41.350 02:47:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:41.350 02:47:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:41.350 02:47:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:41.350 02:47:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:41.350 02:47:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:41.350 02:47:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.350 02:47:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.350 02:47:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:41.350 02:47:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:41.350 02:47:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:41.350 02:47:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:41.609 02:47:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:41.609 02:47:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:41.609 02:47:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:41.609 02:47:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:41.609 02:47:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:41.609 02:47:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:41.609 02:47:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:41.609 02:47:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:41.609 02:47:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:41.609 02:47:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:41.867 02:47:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:41.867 02:47:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:41.867 02:47:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:41.867 02:47:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:41.867 02:47:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:41.867 02:47:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:41.867 02:47:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:41.867 02:47:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:41.867 02:47:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:41.867 02:47:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.867 02:47:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:42.124 02:47:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:42.124 02:47:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:42.124 02:47:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:42.125 02:47:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:42.125 02:47:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:42.125 02:47:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:42.125 02:47:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:42.125 02:47:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:42.125 02:47:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:42.125 02:47:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:42.125 02:47:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:42.125 02:47:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:42.125 02:47:32 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:42.384 02:47:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:42.643 [2024-05-13 02:47:33.353843] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:42.643 [2024-05-13 02:47:33.443421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.643 [2024-05-13 02:47:33.443424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.901 [2024-05-13 02:47:33.502367] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:42.901 [2024-05-13 02:47:33.502450] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
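Before each device is written, the waitfornbd probe seen in the trace (common/autotest_common.sh@864-885) polls /proc/partitions for the nbd name and then reads a single 4 KiB block with O_DIRECT to confirm the device is actually servicing I/O. A simplified restatement of that probe is sketched below; the 0.1 s retry delay is an assumption, since the traced helper only shows the loop bounds:

  # simplified readiness probe modeled on waitfornbd; not the exact helper
  wait_for_nbd() {
    local name=$1 i
    for ((i = 1; i <= 20; i++)); do                  # wait for the kernel to register the device
      grep -q -w "$name" /proc/partitions && break
      sleep 0.1                                      # assumed retry interval
    done
    local probe
    probe=$(mktemp)
    dd if=/dev/"$name" of="$probe" bs=4096 count=1 iflag=direct   # one O_DIRECT read
    local size
    size=$(stat -c %s "$probe")
    rm -f "$probe"
    [ "$size" != 0 ]                                 # non-empty read => device is live
  }
  wait_for_nbd nbd0 && wait_for_nbd nbd1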
00:06:45.437 02:47:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:45.437 02:47:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:45.437 spdk_app_start Round 2 00:06:45.437 02:47:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 228592 /var/tmp/spdk-nbd.sock 00:06:45.437 02:47:36 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 228592 ']' 00:06:45.437 02:47:36 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:45.437 02:47:36 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:45.437 02:47:36 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:45.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:45.437 02:47:36 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:45.437 02:47:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:45.694 02:47:36 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:45.694 02:47:36 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:45.694 02:47:36 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:45.950 Malloc0 00:06:45.950 02:47:36 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:46.208 Malloc1 00:06:46.208 02:47:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:46.208 02:47:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.208 02:47:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:46.208 02:47:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:46.208 02:47:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.208 02:47:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:46.208 02:47:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:46.208 02:47:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.208 02:47:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:46.208 02:47:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:46.208 02:47:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.208 02:47:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:46.208 02:47:36 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:46.208 02:47:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:46.208 02:47:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.208 02:47:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:46.466 /dev/nbd0 00:06:46.466 02:47:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:46.466 02:47:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:06:46.466 02:47:37 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:46.466 02:47:37 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:46.466 02:47:37 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:46.466 02:47:37 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:46.466 02:47:37 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:46.466 02:47:37 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:46.466 02:47:37 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:46.466 02:47:37 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:46.466 02:47:37 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:46.466 1+0 records in 00:06:46.466 1+0 records out 00:06:46.466 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000163825 s, 25.0 MB/s 00:06:46.466 02:47:37 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:46.466 02:47:37 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:46.466 02:47:37 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:46.466 02:47:37 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:46.466 02:47:37 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:46.466 02:47:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.466 02:47:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.466 02:47:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:46.724 /dev/nbd1 00:06:46.724 02:47:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:46.724 02:47:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:46.724 02:47:37 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:46.724 02:47:37 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:46.724 02:47:37 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:46.724 02:47:37 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:46.724 02:47:37 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:46.724 02:47:37 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:46.724 02:47:37 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:46.724 02:47:37 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:46.724 02:47:37 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:46.724 1+0 records in 00:06:46.724 1+0 records out 00:06:46.724 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000169618 s, 24.1 MB/s 00:06:46.724 02:47:37 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:46.724 02:47:37 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:46.724 02:47:37 event.app_repeat -- 
common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:46.724 02:47:37 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:46.724 02:47:37 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:46.724 02:47:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.724 02:47:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.724 02:47:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:46.724 02:47:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.724 02:47:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:46.982 02:47:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:46.982 { 00:06:46.982 "nbd_device": "/dev/nbd0", 00:06:46.982 "bdev_name": "Malloc0" 00:06:46.982 }, 00:06:46.982 { 00:06:46.982 "nbd_device": "/dev/nbd1", 00:06:46.982 "bdev_name": "Malloc1" 00:06:46.982 } 00:06:46.982 ]' 00:06:46.982 02:47:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:46.982 { 00:06:46.982 "nbd_device": "/dev/nbd0", 00:06:46.982 "bdev_name": "Malloc0" 00:06:46.982 }, 00:06:46.982 { 00:06:46.982 "nbd_device": "/dev/nbd1", 00:06:46.982 "bdev_name": "Malloc1" 00:06:46.982 } 00:06:46.982 ]' 00:06:46.982 02:47:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:46.982 02:47:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:46.982 /dev/nbd1' 00:06:46.982 02:47:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:46.982 /dev/nbd1' 00:06:46.982 02:47:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:46.982 02:47:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:46.982 02:47:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:46.982 02:47:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:46.982 02:47:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:46.982 02:47:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:46.982 02:47:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.982 02:47:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:46.982 02:47:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:46.982 02:47:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:46.982 02:47:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:46.982 02:47:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:46.982 256+0 records in 00:06:46.982 256+0 records out 00:06:46.982 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00496999 s, 211 MB/s 00:06:46.982 02:47:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.982 02:47:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:47.293 256+0 records in 00:06:47.293 256+0 records out 00:06:47.293 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.023687 s, 44.3 MB/s 00:06:47.293 02:47:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:47.293 02:47:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:47.293 256+0 records in 00:06:47.293 256+0 records out 00:06:47.293 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0226615 s, 46.3 MB/s 00:06:47.293 02:47:37 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:47.293 02:47:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.293 02:47:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:47.293 02:47:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:47.293 02:47:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:47.293 02:47:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:47.293 02:47:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:47.293 02:47:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:47.293 02:47:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:47.293 02:47:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:47.293 02:47:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:47.293 02:47:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:47.293 02:47:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:47.293 02:47:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.293 02:47:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.293 02:47:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:47.293 02:47:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:47.293 02:47:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.293 02:47:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:47.580 02:47:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:47.580 02:47:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:47.580 02:47:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:47.580 02:47:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.580 02:47:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.580 02:47:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:47.580 02:47:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:47.580 02:47:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.580 02:47:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.580 02:47:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:47.580 02:47:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:47.580 02:47:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:47.580 02:47:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:47.580 02:47:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.580 02:47:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.580 02:47:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:47.580 02:47:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:47.580 02:47:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.580 02:47:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:47.580 02:47:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.580 02:47:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:47.839 02:47:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:47.839 02:47:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:47.839 02:47:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:47.839 02:47:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:47.839 02:47:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:47.839 02:47:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:48.100 02:47:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:48.100 02:47:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:48.100 02:47:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:48.100 02:47:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:48.100 02:47:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:48.100 02:47:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:48.100 02:47:38 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:48.360 02:47:38 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:48.360 [2024-05-13 02:47:39.136434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:48.619 [2024-05-13 02:47:39.226327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.619 [2024-05-13 02:47:39.226327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.619 [2024-05-13 02:47:39.288970] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:48.619 [2024-05-13 02:47:39.289061] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
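Each 'spdk_app_start Round N' banner above comes from the same driver loop in event/event.sh: wait for the RPC socket, create two 64 MiB malloc bdevs (4096-byte blocks), run the nbd write/verify pass, then SIGTERM the app so app_repeat can restart it for the next round. Condensed into a sketch below; APP_PID and the exact loop body are simplifications, while waitforlisten and nbd_rpc_data_verify are the sourced helpers named in the trace:

  # condensed sketch of the app_repeat loop inferred from the event.sh trace; not the script itself
  for round in 0 1 2; do
    echo "spdk_app_start Round $round"
    waitforlisten "$APP_PID" /var/tmp/spdk-nbd.sock                    # RPC socket is up
    rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096        # -> Malloc0
    rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096        # -> Malloc1
    nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
    rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM        # end this round
    sleep 3                                                            # let app_repeat restart the app
  done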
00:06:51.156 02:47:41 event.app_repeat -- event/event.sh@38 -- # waitforlisten 228592 /var/tmp/spdk-nbd.sock 00:06:51.156 02:47:41 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 228592 ']' 00:06:51.156 02:47:41 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:51.156 02:47:41 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:51.156 02:47:41 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:51.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:51.156 02:47:41 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:51.156 02:47:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:51.416 02:47:42 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:51.416 02:47:42 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:51.416 02:47:42 event.app_repeat -- event/event.sh@39 -- # killprocess 228592 00:06:51.416 02:47:42 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 228592 ']' 00:06:51.416 02:47:42 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 228592 00:06:51.416 02:47:42 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:06:51.416 02:47:42 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:51.416 02:47:42 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 228592 00:06:51.416 02:47:42 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:51.416 02:47:42 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:51.416 02:47:42 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 228592' 00:06:51.416 killing process with pid 228592 00:06:51.416 02:47:42 event.app_repeat -- common/autotest_common.sh@965 -- # kill 228592 00:06:51.416 02:47:42 event.app_repeat -- common/autotest_common.sh@970 -- # wait 228592 00:06:51.675 spdk_app_start is called in Round 0. 00:06:51.675 Shutdown signal received, stop current app iteration 00:06:51.675 Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 reinitialization... 00:06:51.675 spdk_app_start is called in Round 1. 00:06:51.675 Shutdown signal received, stop current app iteration 00:06:51.675 Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 reinitialization... 00:06:51.675 spdk_app_start is called in Round 2. 00:06:51.675 Shutdown signal received, stop current app iteration 00:06:51.675 Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 reinitialization... 00:06:51.675 spdk_app_start is called in Round 3. 
00:06:51.675 Shutdown signal received, stop current app iteration 00:06:51.675 02:47:42 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:51.676 02:47:42 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:51.676 00:06:51.676 real 0m17.824s 00:06:51.676 user 0m39.215s 00:06:51.676 sys 0m3.353s 00:06:51.676 02:47:42 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:51.676 02:47:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:51.676 ************************************ 00:06:51.676 END TEST app_repeat 00:06:51.676 ************************************ 00:06:51.676 02:47:42 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:51.676 02:47:42 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:51.676 02:47:42 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:51.676 02:47:42 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:51.676 02:47:42 event -- common/autotest_common.sh@10 -- # set +x 00:06:51.676 ************************************ 00:06:51.676 START TEST cpu_locks 00:06:51.676 ************************************ 00:06:51.676 02:47:42 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:51.936 * Looking for test storage... 00:06:51.936 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:51.936 02:47:42 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:51.936 02:47:42 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:51.936 02:47:42 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:51.936 02:47:42 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:51.936 02:47:42 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:51.936 02:47:42 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:51.937 02:47:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.937 ************************************ 00:06:51.937 START TEST default_locks 00:06:51.937 ************************************ 00:06:51.937 02:47:42 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:06:51.937 02:47:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=230923 00:06:51.937 02:47:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:51.937 02:47:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 230923 00:06:51.937 02:47:42 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 230923 ']' 00:06:51.937 02:47:42 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.937 02:47:42 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:51.937 02:47:42 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:51.937 02:47:42 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:51.937 02:47:42 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.937 [2024-05-13 02:47:42.594152] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:06:51.937 [2024-05-13 02:47:42.594242] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid230923 ] 00:06:51.937 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.937 [2024-05-13 02:47:42.626302] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:51.937 [2024-05-13 02:47:42.652135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.197 [2024-05-13 02:47:42.740639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.197 02:47:42 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:52.197 02:47:42 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:06:52.197 02:47:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 230923 00:06:52.197 02:47:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:52.197 02:47:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 230923 00:06:52.764 lslocks: write error 00:06:52.764 02:47:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 230923 00:06:52.764 02:47:43 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 230923 ']' 00:06:52.764 02:47:43 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 230923 00:06:52.764 02:47:43 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:06:52.764 02:47:43 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:52.764 02:47:43 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 230923 00:06:52.764 02:47:43 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:52.764 02:47:43 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:52.764 02:47:43 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 230923' 00:06:52.764 killing process with pid 230923 00:06:52.764 02:47:43 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 230923 00:06:52.764 02:47:43 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 230923 00:06:53.024 02:47:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 230923 00:06:53.024 02:47:43 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:53.024 02:47:43 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 230923 00:06:53.024 02:47:43 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:53.024 02:47:43 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:53.024 02:47:43 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:53.024 02:47:43 
event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:53.024 02:47:43 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 230923 00:06:53.024 02:47:43 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 230923 ']' 00:06:53.024 02:47:43 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.024 02:47:43 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:53.024 02:47:43 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.024 02:47:43 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:53.024 02:47:43 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:53.024 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (230923) - No such process 00:06:53.024 ERROR: process (pid: 230923) is no longer running 00:06:53.024 02:47:43 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:53.024 02:47:43 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:06:53.024 02:47:43 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:53.024 02:47:43 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:53.024 02:47:43 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:53.024 02:47:43 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:53.024 02:47:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:53.024 02:47:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:53.024 02:47:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:53.024 02:47:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:53.024 00:06:53.024 real 0m1.176s 00:06:53.024 user 0m1.102s 00:06:53.024 sys 0m0.544s 00:06:53.024 02:47:43 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:53.024 02:47:43 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:53.024 ************************************ 00:06:53.024 END TEST default_locks 00:06:53.024 ************************************ 00:06:53.024 02:47:43 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:53.024 02:47:43 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:53.024 02:47:43 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:53.024 02:47:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:53.024 ************************************ 00:06:53.024 START TEST default_locks_via_rpc 00:06:53.024 ************************************ 00:06:53.024 02:47:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:06:53.024 02:47:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=231109 00:06:53.024 02:47:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:53.024 02:47:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 231109 00:06:53.024 02:47:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 231109 ']' 00:06:53.024 02:47:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.024 02:47:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:53.024 02:47:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.024 02:47:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:53.024 02:47:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.024 [2024-05-13 02:47:43.820647] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:06:53.024 [2024-05-13 02:47:43.820780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid231109 ] 00:06:53.283 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.283 [2024-05-13 02:47:43.854030] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:53.283 [2024-05-13 02:47:43.880249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.283 [2024-05-13 02:47:43.965183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.542 02:47:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:53.542 02:47:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:53.542 02:47:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:53.542 02:47:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.542 02:47:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.542 02:47:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.542 02:47:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:53.542 02:47:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:53.542 02:47:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:53.542 02:47:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:53.542 02:47:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:53.542 02:47:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.542 02:47:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.542 02:47:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.542 02:47:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 
-- # locks_exist 231109 00:06:53.542 02:47:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 231109 00:06:53.542 02:47:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:53.802 02:47:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 231109 00:06:53.802 02:47:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 231109 ']' 00:06:53.802 02:47:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 231109 00:06:53.802 02:47:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:06:53.802 02:47:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:53.802 02:47:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 231109 00:06:53.802 02:47:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:53.802 02:47:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:53.802 02:47:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 231109' 00:06:53.802 killing process with pid 231109 00:06:53.802 02:47:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 231109 00:06:53.802 02:47:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 231109 00:06:54.371 00:06:54.372 real 0m1.191s 00:06:54.372 user 0m1.126s 00:06:54.372 sys 0m0.524s 00:06:54.372 02:47:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:54.372 02:47:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.372 ************************************ 00:06:54.372 END TEST default_locks_via_rpc 00:06:54.372 ************************************ 00:06:54.372 02:47:44 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:54.372 02:47:44 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:54.372 02:47:44 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:54.372 02:47:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.372 ************************************ 00:06:54.372 START TEST non_locking_app_on_locked_coremask 00:06:54.372 ************************************ 00:06:54.372 02:47:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:06:54.372 02:47:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=231279 00:06:54.372 02:47:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:54.372 02:47:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 231279 /var/tmp/spdk.sock 00:06:54.372 02:47:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 231279 ']' 00:06:54.372 02:47:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.372 02:47:45 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:06:54.372 02:47:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.372 02:47:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:54.372 02:47:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.372 [2024-05-13 02:47:45.060938] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:06:54.372 [2024-05-13 02:47:45.061027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid231279 ] 00:06:54.372 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.372 [2024-05-13 02:47:45.093915] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:54.372 [2024-05-13 02:47:45.124794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.630 [2024-05-13 02:47:45.220311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.888 02:47:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:54.888 02:47:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:54.888 02:47:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=231285 00:06:54.888 02:47:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:54.888 02:47:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 231285 /var/tmp/spdk2.sock 00:06:54.888 02:47:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 231285 ']' 00:06:54.888 02:47:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:54.888 02:47:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:54.888 02:47:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:54.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:54.888 02:47:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:54.888 02:47:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.888 [2024-05-13 02:47:45.521216] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 
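The xtrace above shows the check every cpu_locks case keeps repeating: once spdk_tgt is up, locks_exist asks lslocks whether that pid still holds an advisory lock on one of the spdk_cpu_lock files, and default_locks_via_rpc additionally flips the feature at runtime over the RPC socket. A condensed sketch reconstructed from the trace; the pid is just this run's value, and rpc_cmd is assumed to resolve to scripts/rpc.py against /var/tmp/spdk.sock:

    # locks_exist: succeeds only while the target holds a core-lock file
    lslocks -p 231109 | grep -q spdk_cpu_lock
    # default_locks_via_rpc toggles the locks without restarting the app
    rpc_cmd framework_disable_cpumask_locks
    rpc_cmd framework_enable_cpumask_locks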
00:06:54.888 [2024-05-13 02:47:45.521298] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid231285 ] 00:06:54.888 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.888 [2024-05-13 02:47:45.556414] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:54.888 [2024-05-13 02:47:45.611894] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:54.888 [2024-05-13 02:47:45.611924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.146 [2024-05-13 02:47:45.798775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.714 02:47:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:55.715 02:47:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:55.715 02:47:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 231279 00:06:55.715 02:47:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 231279 00:06:55.715 02:47:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:56.283 lslocks: write error 00:06:56.283 02:47:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 231279 00:06:56.283 02:47:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 231279 ']' 00:06:56.283 02:47:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 231279 00:06:56.283 02:47:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:56.283 02:47:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:56.283 02:47:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 231279 00:06:56.283 02:47:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:56.283 02:47:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:56.283 02:47:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 231279' 00:06:56.283 killing process with pid 231279 00:06:56.283 02:47:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 231279 00:06:56.283 02:47:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 231279 00:06:57.224 02:47:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 231285 00:06:57.224 02:47:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 231285 ']' 00:06:57.224 02:47:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 231285 00:06:57.224 02:47:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:57.224 02:47:47 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:57.224 02:47:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 231285 00:06:57.224 02:47:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:57.224 02:47:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:57.224 02:47:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 231285' 00:06:57.224 killing process with pid 231285 00:06:57.224 02:47:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 231285 00:06:57.224 02:47:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 231285 00:06:57.483 00:06:57.483 real 0m3.222s 00:06:57.483 user 0m3.384s 00:06:57.483 sys 0m1.042s 00:06:57.483 02:47:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:57.483 02:47:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.483 ************************************ 00:06:57.483 END TEST non_locking_app_on_locked_coremask 00:06:57.483 ************************************ 00:06:57.483 02:47:48 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:57.483 02:47:48 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:57.483 02:47:48 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:57.483 02:47:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.744 ************************************ 00:06:57.744 START TEST locking_app_on_unlocked_coremask 00:06:57.744 ************************************ 00:06:57.744 02:47:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:06:57.744 02:47:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=231712 00:06:57.744 02:47:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 231712 /var/tmp/spdk.sock 00:06:57.744 02:47:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:57.744 02:47:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 231712 ']' 00:06:57.744 02:47:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.744 02:47:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:57.744 02:47:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
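The non_locking_app_on_locked_coremask case that ends here runs two targets on the same core: the first takes the core-0 lock, while the second is started with --disable-cpumask-locks and its own RPC socket so it can share the core without contending for the lock. The stray "lslocks: write error" line reads as expected noise rather than a failure: grep -q exits on the first spdk_cpu_lock match, so lslocks loses its output pipe mid-write (an inference from the trace, not something the log states). Condensed, with this run's pids and shortened paths:

    ./build/bin/spdk_tgt -m 0x1 &                                                 # pid 231279, holds the core-0 lock
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # pid 231285, takes no lock
    lslocks -p 231279 | grep -q spdk_cpu_lock                                     # only the first pid shows a lock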
00:06:57.744 02:47:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:57.744 02:47:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.744 [2024-05-13 02:47:48.342265] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:06:57.744 [2024-05-13 02:47:48.342355] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid231712 ] 00:06:57.744 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.744 [2024-05-13 02:47:48.374426] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:57.744 [2024-05-13 02:47:48.400388] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:57.744 [2024-05-13 02:47:48.400412] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.744 [2024-05-13 02:47:48.486414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.003 02:47:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:58.003 02:47:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:58.003 02:47:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=231720 00:06:58.003 02:47:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:58.003 02:47:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 231720 /var/tmp/spdk2.sock 00:06:58.003 02:47:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 231720 ']' 00:06:58.003 02:47:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:58.003 02:47:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:58.003 02:47:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:58.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:58.003 02:47:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:58.003 02:47:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:58.003 [2024-05-13 02:47:48.787826] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:06:58.003 [2024-05-13 02:47:48.787913] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid231720 ] 00:06:58.263 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.263 [2024-05-13 02:47:48.821475] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:58.263 [2024-05-13 02:47:48.878903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.522 [2024-05-13 02:47:49.070058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.089 02:47:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:59.089 02:47:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:59.089 02:47:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 231720 00:06:59.089 02:47:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 231720 00:06:59.089 02:47:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:59.349 lslocks: write error 00:06:59.349 02:47:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 231712 00:06:59.349 02:47:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 231712 ']' 00:06:59.349 02:47:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 231712 00:06:59.349 02:47:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:59.349 02:47:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:59.349 02:47:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 231712 00:06:59.349 02:47:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:59.349 02:47:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:59.349 02:47:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 231712' 00:06:59.349 killing process with pid 231712 00:06:59.349 02:47:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 231712 00:06:59.349 02:47:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 231712 00:07:00.288 02:47:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 231720 00:07:00.288 02:47:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 231720 ']' 00:07:00.288 02:47:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 231720 00:07:00.288 02:47:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:00.288 02:47:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:00.288 02:47:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 231720 00:07:00.288 02:47:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:00.288 02:47:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:00.288 02:47:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 231720' 00:07:00.288 killing process with pid 231720 00:07:00.288 02:47:50 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 231720 00:07:00.288 02:47:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 231720 00:07:00.858 00:07:00.858 real 0m3.069s 00:07:00.858 user 0m3.192s 00:07:00.858 sys 0m1.030s 00:07:00.858 02:47:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:00.858 02:47:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.858 ************************************ 00:07:00.858 END TEST locking_app_on_unlocked_coremask 00:07:00.858 ************************************ 00:07:00.858 02:47:51 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:00.858 02:47:51 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:00.858 02:47:51 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:00.858 02:47:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:00.858 ************************************ 00:07:00.858 START TEST locking_app_on_locked_coremask 00:07:00.858 ************************************ 00:07:00.858 02:47:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:07:00.858 02:47:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=232031 00:07:00.859 02:47:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:00.859 02:47:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 232031 /var/tmp/spdk.sock 00:07:00.859 02:47:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 232031 ']' 00:07:00.859 02:47:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.859 02:47:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:00.859 02:47:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.859 02:47:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:00.859 02:47:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.859 [2024-05-13 02:47:51.467927] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:00.859 [2024-05-13 02:47:51.468019] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid232031 ] 00:07:00.859 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.859 [2024-05-13 02:47:51.500654] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
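locking_app_on_unlocked_coremask, wrapped up above, is the mirror image: the first target is launched with --disable-cpumask-locks, so core 0 stays unlocked, and the second, normally started target on /var/tmp/spdk2.sock is the one that claims it; locks_exist is therefore pointed at the second pid. A sketch with this run's values, paths shortened:

    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &   # pid 231712, takes no lock
    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # pid 231720, claims core 0
    lslocks -p 231720 | grep -q spdk_cpu_lock               # the lock belongs to the second target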
00:07:00.859 [2024-05-13 02:47:51.526524] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.859 [2024-05-13 02:47:51.613632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.117 02:47:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:01.117 02:47:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:01.117 02:47:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=232154 00:07:01.118 02:47:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:01.118 02:47:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 232154 /var/tmp/spdk2.sock 00:07:01.118 02:47:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:01.118 02:47:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 232154 /var/tmp/spdk2.sock 00:07:01.118 02:47:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:01.118 02:47:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.118 02:47:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:01.118 02:47:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.118 02:47:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 232154 /var/tmp/spdk2.sock 00:07:01.118 02:47:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 232154 ']' 00:07:01.118 02:47:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:01.118 02:47:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:01.118 02:47:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:01.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:01.118 02:47:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:01.118 02:47:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:01.118 [2024-05-13 02:47:51.902251] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:01.118 [2024-05-13 02:47:51.902321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid232154 ] 00:07:01.382 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.382 [2024-05-13 02:47:51.937455] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:01.382 [2024-05-13 02:47:51.995935] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 232031 has claimed it. 00:07:01.382 [2024-05-13 02:47:51.995984] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:01.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (232154) - No such process 00:07:01.997 ERROR: process (pid: 232154) is no longer running 00:07:01.997 02:47:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:01.997 02:47:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:07:01.997 02:47:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:01.997 02:47:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:01.997 02:47:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:01.997 02:47:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:01.997 02:47:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 232031 00:07:01.997 02:47:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 232031 00:07:01.997 02:47:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:02.257 lslocks: write error 00:07:02.258 02:47:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 232031 00:07:02.258 02:47:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 232031 ']' 00:07:02.258 02:47:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 232031 00:07:02.258 02:47:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:02.258 02:47:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:02.258 02:47:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 232031 00:07:02.258 02:47:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:02.258 02:47:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:02.258 02:47:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 232031' 00:07:02.258 killing process with pid 232031 00:07:02.258 02:47:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 232031 00:07:02.258 02:47:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 232031 00:07:02.827 00:07:02.827 real 0m1.943s 00:07:02.827 user 0m2.104s 00:07:02.827 sys 0m0.631s 00:07:02.827 02:47:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:02.827 02:47:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:02.827 ************************************ 00:07:02.827 END TEST locking_app_on_locked_coremask 00:07:02.827 ************************************ 00:07:02.827 02:47:53 event.cpu_locks -- 
event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:02.827 02:47:53 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:02.827 02:47:53 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:02.828 02:47:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.828 ************************************ 00:07:02.828 START TEST locking_overlapped_coremask 00:07:02.828 ************************************ 00:07:02.828 02:47:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:07:02.828 02:47:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=232327 00:07:02.828 02:47:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:02.828 02:47:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 232327 /var/tmp/spdk.sock 00:07:02.828 02:47:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 232327 ']' 00:07:02.828 02:47:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.828 02:47:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:02.828 02:47:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.828 02:47:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:02.828 02:47:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:02.828 [2024-05-13 02:47:53.468804] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:02.828 [2024-05-13 02:47:53.468885] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid232327 ] 00:07:02.828 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.828 [2024-05-13 02:47:53.500638] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
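Both the single-core case that just finished and the overlapped-mask case starting here rely on the same expected-failure pattern: the second target must refuse to start (claim_cpu_cores reports that another process already holds the core and spdk_app_start exits), so waitforlisten is wrapped in the autotest_common.sh NOT helper, which, as the es=1 bookkeeping in the trace suggests, inverts the exit status. Roughly, for the run above:

    ./build/bin/spdk_tgt -m 0x1 &                         # pid 232031 holds core 0
    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &  # pid 232154, expected to abort at startup
    NOT waitforlisten 232154 /var/tmp/spdk2.sock          # NOT turns the expected failure into a pass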
00:07:02.828 [2024-05-13 02:47:53.530400] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:02.828 [2024-05-13 02:47:53.623852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.828 [2024-05-13 02:47:53.623906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:02.828 [2024-05-13 02:47:53.623924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.085 02:47:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:03.085 02:47:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:03.085 02:47:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=232384 00:07:03.085 02:47:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:03.085 02:47:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 232384 /var/tmp/spdk2.sock 00:07:03.085 02:47:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:03.085 02:47:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 232384 /var/tmp/spdk2.sock 00:07:03.085 02:47:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:03.085 02:47:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:03.085 02:47:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:03.085 02:47:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:03.085 02:47:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 232384 /var/tmp/spdk2.sock 00:07:03.085 02:47:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 232384 ']' 00:07:03.085 02:47:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:03.085 02:47:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:03.085 02:47:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:03.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:03.085 02:47:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:03.085 02:47:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:03.344 [2024-05-13 02:47:53.924346] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:03.344 [2024-05-13 02:47:53.924451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid232384 ] 00:07:03.344 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.344 [2024-05-13 02:47:53.960712] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:07:03.344 [2024-05-13 02:47:54.015457] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 232327 has claimed it. 00:07:03.344 [2024-05-13 02:47:54.015511] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:03.913 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (232384) - No such process 00:07:03.913 ERROR: process (pid: 232384) is no longer running 00:07:03.913 02:47:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:03.913 02:47:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:07:03.913 02:47:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:03.913 02:47:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:03.913 02:47:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:03.913 02:47:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:03.913 02:47:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:03.913 02:47:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:03.913 02:47:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:03.913 02:47:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:03.913 02:47:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 232327 00:07:03.913 02:47:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 232327 ']' 00:07:03.913 02:47:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 232327 00:07:03.913 02:47:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:07:03.913 02:47:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:03.913 02:47:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 232327 00:07:03.913 02:47:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:03.913 02:47:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:03.913 02:47:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 232327' 00:07:03.913 killing process with pid 232327 00:07:03.913 02:47:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 232327 00:07:03.913 02:47:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 232327 00:07:04.480 00:07:04.480 real 0m1.636s 00:07:04.480 user 0m4.404s 00:07:04.480 sys 0m0.475s 00:07:04.480 02:47:55 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:07:04.480 02:47:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:04.481 ************************************ 00:07:04.481 END TEST locking_overlapped_coremask 00:07:04.481 ************************************ 00:07:04.481 02:47:55 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:04.481 02:47:55 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:04.481 02:47:55 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:04.481 02:47:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:04.481 ************************************ 00:07:04.481 START TEST locking_overlapped_coremask_via_rpc 00:07:04.481 ************************************ 00:07:04.481 02:47:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:07:04.481 02:47:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=232622 00:07:04.481 02:47:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:04.481 02:47:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 232622 /var/tmp/spdk.sock 00:07:04.481 02:47:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 232622 ']' 00:07:04.481 02:47:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.481 02:47:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:04.481 02:47:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.481 02:47:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:04.481 02:47:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.481 [2024-05-13 02:47:55.152061] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:04.481 [2024-05-13 02:47:55.152150] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid232622 ] 00:07:04.481 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.481 [2024-05-13 02:47:55.184432] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:04.481 [2024-05-13 02:47:55.214895] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
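In locking_overlapped_coremask the primary runs with -m 0x7 (cores 0-2) and the second target asks for -m 0x1c (cores 2-4); the shared core 2 is refused, and check_remaining_locks then verifies that exactly the primary's three lock files are left. The check is a plain brace-expansion comparison, visible in the xtrace and condensed here:

    locks=(/var/tmp/spdk_cpu_lock_*)                    # whatever lock files exist right now
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0-2, i.e. the 0x7 mask
    [[ "${locks[*]}" == "${locks_expected[*]}" ]]       # anything extra or missing fails the test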
00:07:04.481 [2024-05-13 02:47:55.214925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:04.741 [2024-05-13 02:47:55.309923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.741 [2024-05-13 02:47:55.309978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.741 [2024-05-13 02:47:55.309996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.000 02:47:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:05.000 02:47:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:05.000 02:47:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=232632 00:07:05.000 02:47:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 232632 /var/tmp/spdk2.sock 00:07:05.000 02:47:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 232632 ']' 00:07:05.000 02:47:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:05.000 02:47:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:05.000 02:47:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:05.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:05.000 02:47:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:05.000 02:47:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:05.000 02:47:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.000 [2024-05-13 02:47:55.607575] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:05.000 [2024-05-13 02:47:55.607658] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid232632 ] 00:07:05.000 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.000 [2024-05-13 02:47:55.641894] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:05.000 [2024-05-13 02:47:55.697692] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:05.000 [2024-05-13 02:47:55.697722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:05.260 [2024-05-13 02:47:55.874753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:05.260 [2024-05-13 02:47:55.874815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:05.260 [2024-05-13 02:47:55.874817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:05.826 02:47:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:05.826 02:47:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:05.826 02:47:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:05.826 02:47:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.826 02:47:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.826 02:47:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.826 02:47:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:05.826 02:47:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:05.826 02:47:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:05.826 02:47:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:05.826 02:47:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:05.826 02:47:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:05.826 02:47:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:05.826 02:47:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:05.826 02:47:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.826 02:47:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.826 [2024-05-13 02:47:56.555798] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 232622 has claimed it. 
00:07:05.826 request: 00:07:05.826 { 00:07:05.826 "method": "framework_enable_cpumask_locks", 00:07:05.826 "req_id": 1 00:07:05.826 } 00:07:05.826 Got JSON-RPC error response 00:07:05.826 response: 00:07:05.826 { 00:07:05.826 "code": -32603, 00:07:05.826 "message": "Failed to claim CPU core: 2" 00:07:05.826 } 00:07:05.826 02:47:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:05.826 02:47:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:05.826 02:47:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:05.826 02:47:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:05.826 02:47:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:05.826 02:47:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 232622 /var/tmp/spdk.sock 00:07:05.826 02:47:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 232622 ']' 00:07:05.826 02:47:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.826 02:47:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:05.826 02:47:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.826 02:47:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:05.826 02:47:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.084 02:47:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:06.084 02:47:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:06.084 02:47:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 232632 /var/tmp/spdk2.sock 00:07:06.084 02:47:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 232632 ']' 00:07:06.084 02:47:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:06.084 02:47:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:06.084 02:47:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:06.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
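The _via_rpc variant starts both targets with --disable-cpumask-locks, so the overlapping masks boot cleanly; the primary then claims cores 0-2 over RPC, and when the second target issues the same call the claim on core 2 fails and the error comes back as the JSON-RPC response recorded above instead of a startup abort. The same exchange could presumably be reproduced by hand against the second target's socket (the rpc.py invocation is an assumption about what rpc_cmd expands to):

    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # expected error, matching the log: code -32603, "Failed to claim CPU core: 2"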
00:07:06.084 02:47:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:06.084 02:47:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.344 02:47:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:06.344 02:47:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:06.344 02:47:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:06.344 02:47:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:06.344 02:47:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:06.344 02:47:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:06.344 00:07:06.344 real 0m1.961s 00:07:06.344 user 0m1.043s 00:07:06.344 sys 0m0.167s 00:07:06.344 02:47:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:06.344 02:47:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.344 ************************************ 00:07:06.344 END TEST locking_overlapped_coremask_via_rpc 00:07:06.344 ************************************ 00:07:06.344 02:47:57 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:06.344 02:47:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 232622 ]] 00:07:06.344 02:47:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 232622 00:07:06.344 02:47:57 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 232622 ']' 00:07:06.344 02:47:57 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 232622 00:07:06.344 02:47:57 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:07:06.344 02:47:57 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:06.344 02:47:57 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 232622 00:07:06.344 02:47:57 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:06.344 02:47:57 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:06.344 02:47:57 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 232622' 00:07:06.344 killing process with pid 232622 00:07:06.344 02:47:57 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 232622 00:07:06.344 02:47:57 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 232622 00:07:06.912 02:47:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 232632 ]] 00:07:06.912 02:47:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 232632 00:07:06.912 02:47:57 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 232632 ']' 00:07:06.912 02:47:57 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 232632 00:07:06.912 02:47:57 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:07:06.912 02:47:57 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 
00:07:06.912 02:47:57 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 232632 00:07:06.912 02:47:57 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:07:06.912 02:47:57 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:07:06.912 02:47:57 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 232632' 00:07:06.912 killing process with pid 232632 00:07:06.912 02:47:57 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 232632 00:07:06.912 02:47:57 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 232632 00:07:07.171 02:47:57 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:07.171 02:47:57 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:07.171 02:47:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 232622 ]] 00:07:07.171 02:47:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 232622 00:07:07.171 02:47:57 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 232622 ']' 00:07:07.171 02:47:57 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 232622 00:07:07.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (232622) - No such process 00:07:07.171 02:47:57 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 232622 is not found' 00:07:07.171 Process with pid 232622 is not found 00:07:07.171 02:47:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 232632 ]] 00:07:07.171 02:47:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 232632 00:07:07.171 02:47:57 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 232632 ']' 00:07:07.171 02:47:57 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 232632 00:07:07.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (232632) - No such process 00:07:07.171 02:47:57 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 232632 is not found' 00:07:07.171 Process with pid 232632 is not found 00:07:07.171 02:47:57 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:07.171 00:07:07.171 real 0m15.497s 00:07:07.171 user 0m27.047s 00:07:07.171 sys 0m5.344s 00:07:07.171 02:47:57 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:07.171 02:47:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:07.171 ************************************ 00:07:07.171 END TEST cpu_locks 00:07:07.171 ************************************ 00:07:07.431 00:07:07.431 real 0m41.271s 00:07:07.431 user 1m18.682s 00:07:07.431 sys 0m9.567s 00:07:07.431 02:47:57 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:07.431 02:47:57 event -- common/autotest_common.sh@10 -- # set +x 00:07:07.431 ************************************ 00:07:07.431 END TEST event 00:07:07.431 ************************************ 00:07:07.431 02:47:57 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:07.431 02:47:57 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:07.431 02:47:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:07.431 02:47:57 -- common/autotest_common.sh@10 -- # set +x 00:07:07.431 ************************************ 00:07:07.431 START TEST thread 00:07:07.431 ************************************ 00:07:07.431 02:47:58 thread -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:07.431 * Looking for test storage... 00:07:07.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:07.431 02:47:58 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:07.431 02:47:58 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:07.431 02:47:58 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:07.431 02:47:58 thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.431 ************************************ 00:07:07.431 START TEST thread_poller_perf 00:07:07.431 ************************************ 00:07:07.431 02:47:58 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:07.431 [2024-05-13 02:47:58.118820] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:07.431 [2024-05-13 02:47:58.118886] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid233005 ] 00:07:07.431 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.431 [2024-05-13 02:47:58.151112] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:07.431 [2024-05-13 02:47:58.182885] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.690 [2024-05-13 02:47:58.278322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.690 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:08.625 ====================================== 00:07:08.625 busy:2714801826 (cyc) 00:07:08.625 total_run_count: 291000 00:07:08.625 tsc_hz: 2700000000 (cyc) 00:07:08.625 ====================================== 00:07:08.625 poller_cost: 9329 (cyc), 3455 (nsec) 00:07:08.625 00:07:08.625 real 0m1.266s 00:07:08.625 user 0m1.172s 00:07:08.625 sys 0m0.088s 00:07:08.625 02:47:59 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:08.625 02:47:59 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:08.625 ************************************ 00:07:08.625 END TEST thread_poller_perf 00:07:08.625 ************************************ 00:07:08.625 02:47:59 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:08.626 02:47:59 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:08.626 02:47:59 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:08.626 02:47:59 thread -- common/autotest_common.sh@10 -- # set +x 00:07:08.626 ************************************ 00:07:08.626 START TEST thread_poller_perf 00:07:08.626 ************************************ 00:07:08.626 02:47:59 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:08.885 [2024-05-13 02:47:59.438575] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 
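The poller_perf summary above is easy to sanity-check: the reported per-call cost is consistent with the busy cycle count divided by the run count, converted to nanoseconds at the reported TSC rate.

    poller_cost = busy / total_run_count = 2714801826 / 291000 ≈ 9329 cycles
    9329 cycles / 2.7 cycles per ns (tsc_hz = 2700000000) ≈ 3455 ns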
00:07:08.885 [2024-05-13 02:47:59.438643] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid233159 ] 00:07:08.885 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.885 [2024-05-13 02:47:59.471804] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:08.885 [2024-05-13 02:47:59.504296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.885 [2024-05-13 02:47:59.596774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.885 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:10.262 ====================================== 00:07:10.262 busy:2702474484 (cyc) 00:07:10.262 total_run_count: 3935000 00:07:10.262 tsc_hz: 2700000000 (cyc) 00:07:10.262 ====================================== 00:07:10.262 poller_cost: 686 (cyc), 254 (nsec) 00:07:10.262 00:07:10.262 real 0m1.254s 00:07:10.262 user 0m1.161s 00:07:10.262 sys 0m0.086s 00:07:10.262 02:48:00 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:10.262 02:48:00 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:10.262 ************************************ 00:07:10.262 END TEST thread_poller_perf 00:07:10.262 ************************************ 00:07:10.262 02:48:00 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:10.262 00:07:10.262 real 0m2.671s 00:07:10.262 user 0m2.386s 00:07:10.262 sys 0m0.278s 00:07:10.262 02:48:00 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:10.262 02:48:00 thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.262 ************************************ 00:07:10.262 END TEST thread 00:07:10.262 ************************************ 00:07:10.262 02:48:00 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:10.262 02:48:00 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:10.262 02:48:00 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:10.262 02:48:00 -- common/autotest_common.sh@10 -- # set +x 00:07:10.262 ************************************ 00:07:10.262 START TEST accel 00:07:10.262 ************************************ 00:07:10.262 02:48:00 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:10.262 * Looking for test storage... 
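The two poller_perf runs above differ only in the -l argument: the first registers its 1000 pollers with a 1 microsecond period, the second with a period of 0 so the pollers run on every reactor iteration rather than through the timed-poller path, which is consistent with the drop in measured cost from 9329 to 686 cycles per call. The invocations as thread.sh issues them, with the binary path assigned to a helper variable (POLLER_PERF is introduced here only for readability and is not a name from the script):

POLLER_PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf
run_test "thread_poller_perf" $POLLER_PERF -b 1000 -l 1 -t 1   # 1000 pollers, 1 us period, 1 s run
run_test "thread_poller_perf" $POLLER_PERF -b 1000 -l 0 -t 1   # same pollers, period 0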
00:07:10.262 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:10.262 02:48:00 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:10.262 02:48:00 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:10.262 02:48:00 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:10.262 02:48:00 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=233522 00:07:10.262 02:48:00 accel -- accel/accel.sh@63 -- # waitforlisten 233522 00:07:10.262 02:48:00 accel -- common/autotest_common.sh@827 -- # '[' -z 233522 ']' 00:07:10.262 02:48:00 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:10.262 02:48:00 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:10.262 02:48:00 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.262 02:48:00 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:10.262 02:48:00 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.262 02:48:00 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.263 02:48:00 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.263 02:48:00 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:10.263 02:48:00 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.263 02:48:00 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.263 02:48:00 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.263 02:48:00 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:10.263 02:48:00 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:10.263 02:48:00 accel -- accel/accel.sh@41 -- # jq -r . 00:07:10.263 [2024-05-13 02:48:00.847870] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:10.263 [2024-05-13 02:48:00.847993] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid233522 ] 00:07:10.263 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.263 [2024-05-13 02:48:00.879627] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:10.263 [2024-05-13 02:48:00.908068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.263 [2024-05-13 02:48:00.996435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.521 02:48:01 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:10.521 02:48:01 accel -- common/autotest_common.sh@860 -- # return 0 00:07:10.521 02:48:01 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:10.521 02:48:01 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:10.521 02:48:01 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:10.521 02:48:01 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:10.521 02:48:01 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:10.521 02:48:01 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:10.521 02:48:01 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.521 02:48:01 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:10.521 02:48:01 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.521 02:48:01 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.521 02:48:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:10.521 02:48:01 accel -- accel/accel.sh@72 -- # IFS== 00:07:10.521 02:48:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:10.521 02:48:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:10.521 02:48:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:10.521 02:48:01 accel -- accel/accel.sh@72 -- # IFS== 00:07:10.521 02:48:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:10.521 02:48:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:10.521 02:48:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:10.521 02:48:01 accel -- accel/accel.sh@72 -- # IFS== 00:07:10.521 02:48:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:10.521 02:48:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:10.521 02:48:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:10.521 02:48:01 accel -- accel/accel.sh@72 -- # IFS== 00:07:10.521 02:48:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:10.521 02:48:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:10.521 02:48:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:10.521 02:48:01 accel -- accel/accel.sh@72 -- # IFS== 00:07:10.521 02:48:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:10.521 02:48:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:10.521 02:48:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:10.521 02:48:01 accel -- accel/accel.sh@72 -- # IFS== 00:07:10.521 02:48:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:10.521 02:48:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:10.521 02:48:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:10.521 02:48:01 accel -- accel/accel.sh@72 -- # IFS== 00:07:10.521 02:48:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:10.521 02:48:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:10.521 02:48:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:10.521 02:48:01 accel -- accel/accel.sh@72 -- # IFS== 00:07:10.521 02:48:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:10.521 02:48:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:10.521 02:48:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:10.521 02:48:01 accel -- accel/accel.sh@72 -- # IFS== 00:07:10.521 02:48:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:10.521 02:48:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:10.521 02:48:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:10.521 02:48:01 accel -- accel/accel.sh@72 -- # IFS== 00:07:10.521 02:48:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:10.521 02:48:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:10.521 02:48:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:10.521 02:48:01 accel -- accel/accel.sh@72 -- # IFS== 00:07:10.521 02:48:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:10.521 02:48:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:10.521 
02:48:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:10.521 02:48:01 accel -- accel/accel.sh@72 -- # IFS== 00:07:10.521 02:48:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:10.521 02:48:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:10.521 02:48:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:10.521 02:48:01 accel -- accel/accel.sh@72 -- # IFS== 00:07:10.521 02:48:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:10.521 02:48:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:10.521 02:48:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:10.521 02:48:01 accel -- accel/accel.sh@72 -- # IFS== 00:07:10.521 02:48:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:10.521 02:48:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:10.521 02:48:01 accel -- accel/accel.sh@75 -- # killprocess 233522 00:07:10.521 02:48:01 accel -- common/autotest_common.sh@946 -- # '[' -z 233522 ']' 00:07:10.521 02:48:01 accel -- common/autotest_common.sh@950 -- # kill -0 233522 00:07:10.521 02:48:01 accel -- common/autotest_common.sh@951 -- # uname 00:07:10.521 02:48:01 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:10.521 02:48:01 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 233522 00:07:10.521 02:48:01 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:10.521 02:48:01 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:10.521 02:48:01 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 233522' 00:07:10.521 killing process with pid 233522 00:07:10.521 02:48:01 accel -- common/autotest_common.sh@965 -- # kill 233522 00:07:10.521 02:48:01 accel -- common/autotest_common.sh@970 -- # wait 233522 00:07:11.089 02:48:01 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:11.089 02:48:01 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:11.089 02:48:01 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:11.089 02:48:01 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:11.089 02:48:01 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.089 02:48:01 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:07:11.089 02:48:01 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:11.089 02:48:01 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:11.089 02:48:01 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.089 02:48:01 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.089 02:48:01 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.089 02:48:01 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.089 02:48:01 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:11.089 02:48:01 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:11.089 02:48:01 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
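The expected_opcs table built in the loop above comes from the accel_get_opc_assignments RPC: the jq filter flattens the returned JSON map into key=value lines, and each line is split on '=' to record which module owns each opcode (all of them resolve to "software" here, since no hardware accel module is configured). A condensed sketch of that parsing; the sample JSON is an assumed illustration of the RPC output shape, and the loop mirrors rather than copies the accel.sh trace above:

opc_json='{"copy":"software","fill":"software","crc32c":"software"}'   # assumed sample RPC output
declare -A expected_opcs
exp_opcs=( $(jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' <<< "$opc_json") )
for opc_opt in "${exp_opcs[@]}"; do
    IFS== read -r opc module <<< "$opc_opt"    # split "copy=software" into opc / module
    expected_opcs["$opc"]=$module
done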
00:07:11.089 02:48:01 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:11.089 02:48:01 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:11.089 02:48:01 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:11.089 02:48:01 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:11.089 02:48:01 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:11.089 02:48:01 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.089 ************************************ 00:07:11.089 START TEST accel_missing_filename 00:07:11.089 ************************************ 00:07:11.089 02:48:01 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:07:11.089 02:48:01 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:11.089 02:48:01 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:11.089 02:48:01 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:11.089 02:48:01 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:11.089 02:48:01 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:11.089 02:48:01 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:11.089 02:48:01 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:11.089 02:48:01 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:11.089 02:48:01 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:11.089 02:48:01 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.089 02:48:01 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.089 02:48:01 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.089 02:48:01 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.089 02:48:01 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:11.089 02:48:01 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:11.089 02:48:01 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:11.089 [2024-05-13 02:48:01.820504] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:11.089 [2024-05-13 02:48:01.820572] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid233679 ] 00:07:11.089 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.089 [2024-05-13 02:48:01.855081] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
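The -c /dev/fd/62 argument in the accel_perf command lines above is consistent with the accel JSON config being handed to the binary through bash process substitution rather than a file on disk; build_accel_config assembles that JSON, and here contributes nothing because every module check in the trace evaluates false. A minimal illustration of the pattern only -- the empty config and the exact descriptor number are assumptions, not literal output of build_accel_config:

accel_json='{}'                                               # assumed: no accel modules to configure
accel_perf -c <(printf '%s' "$accel_json") -t 1 -w compress   # bash exposes the pipe as /dev/fd/NN
# This is the same invocation the negative test above expects to fail, because the
# compress workload needs an input file via -l.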
00:07:11.089 [2024-05-13 02:48:01.885779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.350 [2024-05-13 02:48:01.980940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.350 [2024-05-13 02:48:02.037302] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:11.350 [2024-05-13 02:48:02.120509] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:07:11.610 A filename is required. 00:07:11.610 02:48:02 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:11.610 02:48:02 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:11.610 02:48:02 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:11.610 02:48:02 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:11.610 02:48:02 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:11.610 02:48:02 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:11.610 00:07:11.610 real 0m0.401s 00:07:11.610 user 0m0.285s 00:07:11.610 sys 0m0.149s 00:07:11.610 02:48:02 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:11.610 02:48:02 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:11.610 ************************************ 00:07:11.610 END TEST accel_missing_filename 00:07:11.610 ************************************ 00:07:11.610 02:48:02 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:11.610 02:48:02 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:07:11.610 02:48:02 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:11.610 02:48:02 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.610 ************************************ 00:07:11.610 START TEST accel_compress_verify 00:07:11.610 ************************************ 00:07:11.610 02:48:02 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:11.610 02:48:02 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:11.610 02:48:02 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:11.610 02:48:02 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:11.610 02:48:02 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:11.610 02:48:02 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:11.610 02:48:02 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:11.610 02:48:02 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:11.610 02:48:02 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:11.610 02:48:02 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:11.610 
02:48:02 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.610 02:48:02 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.610 02:48:02 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.610 02:48:02 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.610 02:48:02 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:11.610 02:48:02 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:11.610 02:48:02 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:11.610 [2024-05-13 02:48:02.268619] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:11.610 [2024-05-13 02:48:02.268679] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid233780 ] 00:07:11.610 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.610 [2024-05-13 02:48:02.301008] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:11.610 [2024-05-13 02:48:02.330951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.871 [2024-05-13 02:48:02.424994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.871 [2024-05-13 02:48:02.487034] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:11.871 [2024-05-13 02:48:02.571567] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:07:11.871 00:07:11.871 Compression does not support the verify option, aborting. 00:07:11.871 02:48:02 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:11.871 02:48:02 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:11.871 02:48:02 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:11.871 02:48:02 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:11.871 02:48:02 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:11.871 02:48:02 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:11.871 00:07:11.871 real 0m0.400s 00:07:11.871 user 0m0.288s 00:07:11.871 sys 0m0.144s 00:07:11.871 02:48:02 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:11.871 02:48:02 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:11.871 ************************************ 00:07:11.871 END TEST accel_compress_verify 00:07:11.871 ************************************ 00:07:11.871 02:48:02 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:11.871 02:48:02 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:11.871 02:48:02 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:11.871 02:48:02 accel -- common/autotest_common.sh@10 -- # set +x 00:07:12.131 ************************************ 00:07:12.131 START TEST accel_wrong_workload 00:07:12.131 ************************************ 00:07:12.131 02:48:02 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:07:12.131 02:48:02 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:12.131 02:48:02 accel.accel_wrong_workload -- 
common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:12.131 02:48:02 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:12.131 02:48:02 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:12.131 02:48:02 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:12.131 02:48:02 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:12.131 02:48:02 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:07:12.131 02:48:02 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:12.131 02:48:02 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:12.131 02:48:02 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:12.131 02:48:02 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:12.131 02:48:02 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.131 02:48:02 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.131 02:48:02 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:12.131 02:48:02 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:12.131 02:48:02 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:12.131 Unsupported workload type: foobar 00:07:12.131 [2024-05-13 02:48:02.715265] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:12.131 accel_perf options: 00:07:12.131 [-h help message] 00:07:12.131 [-q queue depth per core] 00:07:12.131 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:12.131 [-T number of threads per core 00:07:12.131 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:12.131 [-t time in seconds] 00:07:12.131 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:12.131 [ dif_verify, , dif_generate, dif_generate_copy 00:07:12.131 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:12.131 [-l for compress/decompress workloads, name of uncompressed input file 00:07:12.131 [-S for crc32c workload, use this seed value (default 0) 00:07:12.131 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:12.131 [-f for fill workload, use this BYTE value (default 255) 00:07:12.131 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:12.132 [-y verify result if this switch is on] 00:07:12.132 [-a tasks to allocate per core (default: same value as -q)] 00:07:12.132 Can be used to spread operations across a wider range of memory. 
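That usage text is the expected outcome of this test: 'foobar' is not one of the supported workload types, so spdk_app_parse_args rejects -w and accel_perf exits non-zero before the app starts. For contrast, a well-formed invocation built only from flags documented in the help above might look like this (illustration only; the config is the same empty placeholder as before):

accel_perf -c <(printf '{}') -t 1 -q 64 -o 4096 -w xor -x 3 -y
# -q 64   queue depth per core
# -o 4096 transfer size in bytes
# -w xor  a supported workload type
# -x 3    number of xor source buffers (minimum 2)
# -y      verify the result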
00:07:12.132 02:48:02 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:12.132 02:48:02 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:12.132 02:48:02 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:12.132 02:48:02 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:12.132 00:07:12.132 real 0m0.021s 00:07:12.132 user 0m0.011s 00:07:12.132 sys 0m0.010s 00:07:12.132 02:48:02 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:12.132 02:48:02 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:12.132 ************************************ 00:07:12.132 END TEST accel_wrong_workload 00:07:12.132 ************************************ 00:07:12.132 02:48:02 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:12.132 02:48:02 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:07:12.132 02:48:02 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:12.132 Error: writing output failed: Broken pipe 00:07:12.132 02:48:02 accel -- common/autotest_common.sh@10 -- # set +x 00:07:12.132 ************************************ 00:07:12.132 START TEST accel_negative_buffers 00:07:12.132 ************************************ 00:07:12.132 02:48:02 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:12.132 02:48:02 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:12.132 02:48:02 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:12.132 02:48:02 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:12.132 02:48:02 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:12.132 02:48:02 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:12.132 02:48:02 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:12.132 02:48:02 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:12.132 02:48:02 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:12.132 02:48:02 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:12.132 02:48:02 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:12.132 02:48:02 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:12.132 02:48:02 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.132 02:48:02 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.132 02:48:02 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:12.132 02:48:02 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:12.132 02:48:02 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:12.132 -x option must be non-negative. 
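The NOT wrapper driving these negative tests inverts the exit status of the command it runs, with the extra step, visible in the earlier es=234 -> es=106 trace, of folding statuses above 128 back down when the process died on a signal. A simplified sketch of the idea, not the exact autotest_common.sh implementation:

NOT() {
    local es=0
    "$@" || es=$?                            # run the wrapped command, remember a failure
    (( es > 128 )) && es=$(( es - 128 ))     # signal exit codes: e.g. 234 -> 106
    (( es != 0 ))                            # NOT succeeds only if the command failed
}
NOT accel_perf -t 1 -w foobar                # passes: the option parser rejects 'foobar'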
00:07:12.132 [2024-05-13 02:48:02.781256] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:12.132 accel_perf options: 00:07:12.132 [-h help message] 00:07:12.132 [-q queue depth per core] 00:07:12.132 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:12.132 [-T number of threads per core 00:07:12.132 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:12.132 [-t time in seconds] 00:07:12.132 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:12.132 [ dif_verify, , dif_generate, dif_generate_copy 00:07:12.132 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:12.132 [-l for compress/decompress workloads, name of uncompressed input file 00:07:12.132 [-S for crc32c workload, use this seed value (default 0) 00:07:12.132 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:12.132 [-f for fill workload, use this BYTE value (default 255) 00:07:12.132 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:12.132 [-y verify result if this switch is on] 00:07:12.132 [-a tasks to allocate per core (default: same value as -q)] 00:07:12.132 Can be used to spread operations across a wider range of memory. 00:07:12.132 02:48:02 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:12.132 02:48:02 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:12.132 02:48:02 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:12.132 02:48:02 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:12.132 00:07:12.132 real 0m0.021s 00:07:12.132 user 0m0.012s 00:07:12.132 sys 0m0.009s 00:07:12.132 02:48:02 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:12.132 02:48:02 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:12.132 ************************************ 00:07:12.132 END TEST accel_negative_buffers 00:07:12.132 ************************************ 00:07:12.132 Error: writing output failed: Broken pipe 00:07:12.132 02:48:02 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:12.132 02:48:02 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:12.132 02:48:02 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:12.132 02:48:02 accel -- common/autotest_common.sh@10 -- # set +x 00:07:12.132 ************************************ 00:07:12.132 START TEST accel_crc32c 00:07:12.132 ************************************ 00:07:12.132 02:48:02 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:12.132 02:48:02 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:12.132 02:48:02 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:12.132 02:48:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.132 02:48:02 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:12.132 02:48:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.132 02:48:02 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 
-y 00:07:12.132 02:48:02 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:12.132 02:48:02 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:12.132 02:48:02 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:12.132 02:48:02 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.132 02:48:02 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.132 02:48:02 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:12.132 02:48:02 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:12.132 02:48:02 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:12.132 [2024-05-13 02:48:02.856234] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:12.132 [2024-05-13 02:48:02.856299] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid233844 ] 00:07:12.132 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.132 [2024-05-13 02:48:02.889220] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:12.132 [2024-05-13 02:48:02.921404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.390 [2024-05-13 02:48:03.015333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.390 02:48:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.390 02:48:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.390 02:48:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.390 02:48:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.390 02:48:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.390 02:48:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.390 02:48:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.390 02:48:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:12.391 
02:48:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.391 02:48:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.771 02:48:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:13.771 02:48:04 
accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.771 02:48:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.771 02:48:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.771 02:48:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:13.771 02:48:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.771 02:48:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.771 02:48:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.771 02:48:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:13.771 02:48:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.771 02:48:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.771 02:48:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.771 02:48:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:13.771 02:48:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.771 02:48:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.771 02:48:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.771 02:48:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:13.771 02:48:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.771 02:48:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.771 02:48:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.771 02:48:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:13.771 02:48:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.771 02:48:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.771 02:48:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.771 02:48:04 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:13.771 02:48:04 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:13.771 02:48:04 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.771 00:07:13.771 real 0m1.410s 00:07:13.771 user 0m1.258s 00:07:13.771 sys 0m0.152s 00:07:13.771 02:48:04 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:13.771 02:48:04 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:13.771 ************************************ 00:07:13.771 END TEST accel_crc32c 00:07:13.771 ************************************ 00:07:13.771 02:48:04 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:13.771 02:48:04 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:13.771 02:48:04 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:13.771 02:48:04 accel -- common/autotest_common.sh@10 -- # set +x 00:07:13.771 ************************************ 00:07:13.771 START TEST accel_crc32c_C2 00:07:13.771 ************************************ 00:07:13.771 02:48:04 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:13.771 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:13.771 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:13.771 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.771 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:13.771 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.771 02:48:04 accel.accel_crc32c_C2 -- 
accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:13.771 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.771 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:13.771 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:13.771 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.771 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.771 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:13.771 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:13.771 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:13.771 [2024-05-13 02:48:04.317316] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:13.771 [2024-05-13 02:48:04.317381] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid234122 ] 00:07:13.771 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.771 [2024-05-13 02:48:04.348900] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:13.771 [2024-05-13 02:48:04.377711] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.771 [2024-05-13 02:48:04.471226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.771 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.771 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.771 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.771 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.771 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.771 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.771 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.771 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.771 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:13.771 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.771 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.771 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.771 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.771 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.771 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.771 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.771 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.771 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.771 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.771 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.771 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:13.771 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.772 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:13.772 02:48:04 
accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.772 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.772 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:13.772 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.772 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.772 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.772 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:13.772 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.772 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.772 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.772 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.772 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.772 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.772 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.772 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:13.772 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.772 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:13.772 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.772 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.772 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:13.772 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.772 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.772 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.772 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:13.772 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.772 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.772 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.772 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:13.772 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.772 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.772 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.772 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:13.772 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.772 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.772 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.772 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:13.772 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.772 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.772 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.772 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.772 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.772 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.772 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.772 02:48:04 accel.accel_crc32c_C2 -- 
accel/accel.sh@20 -- # val= 00:07:13.772 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.772 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.772 02:48:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.149 02:48:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.149 02:48:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.149 02:48:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.149 02:48:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.149 02:48:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.149 02:48:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.149 02:48:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.149 02:48:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.149 02:48:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.149 02:48:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.149 02:48:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.149 02:48:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.149 02:48:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.149 02:48:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.149 02:48:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.149 02:48:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.149 02:48:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.149 02:48:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.149 02:48:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.149 02:48:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.149 02:48:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.149 02:48:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.149 02:48:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.149 02:48:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.149 02:48:05 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:15.149 02:48:05 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:15.149 02:48:05 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.149 00:07:15.149 real 0m1.395s 00:07:15.149 user 0m1.259s 00:07:15.149 sys 0m0.136s 00:07:15.149 02:48:05 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:15.149 02:48:05 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:15.149 ************************************ 00:07:15.149 END TEST accel_crc32c_C2 00:07:15.149 ************************************ 00:07:15.149 02:48:05 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:15.149 02:48:05 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:15.149 02:48:05 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:15.149 02:48:05 accel -- common/autotest_common.sh@10 -- # set +x 00:07:15.149 ************************************ 00:07:15.149 START TEST accel_copy 00:07:15.149 ************************************ 00:07:15.149 02:48:05 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:07:15.149 02:48:05 accel.accel_copy -- accel/accel.sh@16 -- # local 
accel_opc 00:07:15.149 02:48:05 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:07:15.149 02:48:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.149 02:48:05 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:15.149 02:48:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.149 02:48:05 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:15.149 02:48:05 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:15.149 02:48:05 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:15.149 02:48:05 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:15.149 02:48:05 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.149 02:48:05 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.149 02:48:05 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:15.149 02:48:05 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:15.149 02:48:05 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:15.149 [2024-05-13 02:48:05.760609] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:15.149 [2024-05-13 02:48:05.760672] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid234274 ] 00:07:15.149 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.149 [2024-05-13 02:48:05.792813] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
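The two crc32c variants above map the accel_test arguments directly onto accel_perf flags from the usage text shown earlier: -S 32 is the crc32c seed, -C 2 makes each operation run over a two-buffer io vector, and -y verifies the computed checksum; with no accel module configured, the checks at the end of each test simply confirm that the software module executed the opcode. For reference, the two invocations as issued above (the /dev/fd descriptor number varies per run):

accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y   # one 4096-byte buffer per op, seed 32, verify
accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2    # io vector of 2 buffers per op, default seed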
00:07:15.149 [2024-05-13 02:48:05.822930] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.149 [2024-05-13 02:48:05.913399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.448 02:48:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:15.448 02:48:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.448 02:48:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:15.449 02:48:05 accel.accel_copy -- 
accel/accel.sh@21 -- # case "$var" in 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.449 02:48:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.391 02:48:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:16.391 02:48:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.391 02:48:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.391 02:48:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.391 02:48:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:16.391 02:48:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.391 02:48:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.391 02:48:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.391 02:48:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:16.391 02:48:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.391 02:48:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.391 02:48:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.391 02:48:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:16.391 02:48:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.391 02:48:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.391 02:48:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.391 02:48:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:16.391 02:48:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.391 02:48:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.391 02:48:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.391 02:48:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:16.391 02:48:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.391 02:48:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.391 02:48:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.391 02:48:07 
accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:16.391 02:48:07 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:16.391 02:48:07 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.391 00:07:16.391 real 0m1.396s 00:07:16.391 user 0m1.256s 00:07:16.391 sys 0m0.140s 00:07:16.391 02:48:07 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:16.391 02:48:07 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:16.391 ************************************ 00:07:16.391 END TEST accel_copy 00:07:16.391 ************************************ 00:07:16.391 02:48:07 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:16.391 02:48:07 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:16.391 02:48:07 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:16.391 02:48:07 accel -- common/autotest_common.sh@10 -- # set +x 00:07:16.391 ************************************ 00:07:16.391 START TEST accel_fill 00:07:16.391 ************************************ 00:07:16.391 02:48:07 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:16.391 02:48:07 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:16.391 02:48:07 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:16.391 02:48:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.391 02:48:07 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:16.391 02:48:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.650 02:48:07 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:16.650 02:48:07 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:16.650 02:48:07 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.650 02:48:07 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.650 02:48:07 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.650 02:48:07 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.650 02:48:07 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.650 02:48:07 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:16.651 [2024-05-13 02:48:07.208707] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:16.651 [2024-05-13 02:48:07.208786] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid234502 ] 00:07:16.651 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.651 [2024-05-13 02:48:07.241132] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
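accel_copy finishes here (END TEST, about 1.4 s of wall time for a 1 s measurement window) and the harness immediately moves on to run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y. The START/END banners and the real/user/sys lines around every block come from that run_test wrapper; a simplified sketch of it, assuming the real helper in autotest_common.sh does additional bookkeeping not shown here:

    run_test() {
      local test_name=$1; shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      time "$@"                      # emits the real/user/sys lines seen in this log
      local rc=$?
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
      return $rc
    }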
00:07:16.651 [2024-05-13 02:48:07.270609] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.651 [2024-05-13 02:48:07.361904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:16.651 02:48:07 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.651 02:48:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:18.031 02:48:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:18.031 02:48:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:18.031 02:48:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:18.031 02:48:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:18.031 02:48:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:18.031 02:48:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:18.031 02:48:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:18.031 02:48:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:18.031 02:48:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:18.031 02:48:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:18.031 02:48:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:18.031 02:48:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:18.031 02:48:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:18.031 02:48:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:18.031 02:48:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:18.031 02:48:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:18.031 02:48:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:18.031 02:48:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:18.031 02:48:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:18.031 02:48:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:18.031 02:48:08 
accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:18.031 02:48:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:18.031 02:48:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:18.031 02:48:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:18.031 02:48:08 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:18.031 02:48:08 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:18.031 02:48:08 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.031 00:07:18.031 real 0m1.397s 00:07:18.031 user 0m1.262s 00:07:18.031 sys 0m0.136s 00:07:18.031 02:48:08 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:18.031 02:48:08 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:18.031 ************************************ 00:07:18.031 END TEST accel_fill 00:07:18.031 ************************************ 00:07:18.031 02:48:08 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:18.031 02:48:08 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:18.031 02:48:08 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:18.031 02:48:08 accel -- common/autotest_common.sh@10 -- # set +x 00:07:18.032 ************************************ 00:07:18.032 START TEST accel_copy_crc32c 00:07:18.032 ************************************ 00:07:18.032 02:48:08 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:07:18.032 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:18.032 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:18.032 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.032 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:18.032 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.032 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:18.032 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:18.032 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:18.032 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:18.032 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.032 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.032 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:18.032 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:18.032 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:18.032 [2024-05-13 02:48:08.652131] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:18.032 [2024-05-13 02:48:08.652213] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid235079 ] 00:07:18.032 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.032 [2024-05-13 02:48:08.685315] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
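The accel_copy_crc32c block starts here, and the exact binary invocation is captured in the trace. Rerunning it by hand against a local SPDK build would look like the following; the flag descriptions are inferred from the invocations visible in this log rather than from the tool's help output, so confirm them with accel_perf -h:

    ./build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
    # -c  accel JSON configuration (accel.sh feeds it through a file descriptor here)
    # -t  run time in seconds (1 s per workload throughout this log)
    # -w  workload type: copy, fill, copy_crc32c, dualcast, compare, xor, ...
    # -y  verify the result of every completed operation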
00:07:18.032 [2024-05-13 02:48:08.715560] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.032 [2024-05-13 02:48:08.808885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 
00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.291 02:48:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.228 02:48:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:19.228 02:48:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.228 02:48:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.228 02:48:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.228 02:48:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:19.228 02:48:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.228 02:48:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.488 02:48:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.488 02:48:10 accel.accel_copy_crc32c 
-- accel/accel.sh@20 -- # val= 00:07:19.488 02:48:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.488 02:48:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.488 02:48:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.488 02:48:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:19.488 02:48:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.488 02:48:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.488 02:48:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.488 02:48:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:19.488 02:48:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.488 02:48:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.488 02:48:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.488 02:48:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:19.488 02:48:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.488 02:48:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.488 02:48:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.488 02:48:10 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:19.488 02:48:10 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:19.488 02:48:10 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.488 00:07:19.488 real 0m1.398s 00:07:19.488 user 0m1.260s 00:07:19.488 sys 0m0.139s 00:07:19.488 02:48:10 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:19.488 02:48:10 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:19.488 ************************************ 00:07:19.488 END TEST accel_copy_crc32c 00:07:19.488 ************************************ 00:07:19.488 02:48:10 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:19.488 02:48:10 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:19.488 02:48:10 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:19.488 02:48:10 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.488 ************************************ 00:07:19.488 START TEST accel_copy_crc32c_C2 00:07:19.488 ************************************ 00:07:19.488 02:48:10 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:19.488 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:19.488 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:19.488 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.488 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:19.488 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.488 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:19.488 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.488 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.488 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 
0 ]] 00:07:19.488 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.488 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.488 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.488 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:19.488 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:19.488 [2024-05-13 02:48:10.103364] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:19.488 [2024-05-13 02:48:10.103430] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid235368 ] 00:07:19.488 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.488 [2024-05-13 02:48:10.136583] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:19.488 [2024-05-13 02:48:10.167154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.488 [2024-05-13 02:48:10.261868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 
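This is the chained variant of the previous test: the driver line at accel.sh@106 above differs from the plain copy_crc32c run only by the trailing -C 2, and the trace continuing below shows an '8192 bytes' value alongside the usual 4096-byte buffer. The two driver lines, copied from this log with a hedged reading of the extra flag:

    # accel.sh@105: one copy+CRC32C pass over a 4 KiB buffer, seed 0
    run_test accel_copy_crc32c    accel_test -t 1 -w copy_crc32c -y
    # accel.sh@106: the chained variant; -C 2 presumably requests two chained
    # CRC operations (inferred from the test name and the doubled buffer size,
    # not verified against the script source)
    run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2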
00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@20 -- # val= 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.747 02:48:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.125 02:48:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:21.125 02:48:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.125 02:48:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.125 02:48:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.125 02:48:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:21.125 02:48:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.125 02:48:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.125 02:48:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.125 02:48:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:21.125 02:48:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.125 02:48:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.125 02:48:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.125 02:48:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:21.125 02:48:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.125 02:48:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.125 02:48:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.125 02:48:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:21.125 02:48:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.125 02:48:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.125 02:48:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.125 02:48:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:21.125 02:48:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.125 02:48:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.125 02:48:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.125 02:48:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:21.125 02:48:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:21.125 02:48:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.125 00:07:21.125 real 0m1.416s 00:07:21.125 user 0m1.268s 00:07:21.125 sys 0m0.151s 00:07:21.125 02:48:11 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:21.125 02:48:11 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:21.125 ************************************ 00:07:21.125 END TEST accel_copy_crc32c_C2 00:07:21.125 ************************************ 00:07:21.125 02:48:11 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:21.125 02:48:11 accel 
-- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:21.125 02:48:11 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:21.125 02:48:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.125 ************************************ 00:07:21.125 START TEST accel_dualcast 00:07:21.125 ************************************ 00:07:21.125 02:48:11 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:07:21.125 02:48:11 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:21.125 02:48:11 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:21.125 02:48:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.125 02:48:11 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:21.125 02:48:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.125 02:48:11 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:21.125 02:48:11 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:21.125 02:48:11 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.125 02:48:11 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.125 02:48:11 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.125 02:48:11 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.125 02:48:11 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.125 02:48:11 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:21.125 02:48:11 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:21.125 [2024-05-13 02:48:11.571588] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:21.125 [2024-05-13 02:48:11.571651] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid235528 ] 00:07:21.125 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.125 [2024-05-13 02:48:11.604007] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
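accel_dualcast starts here; the dualcast opcode writes a single 4 KiB source buffer to two destination buffers, and this node again falls back to the software module. The surrounding accel.sh section (@104 through @109 in this trace) simply issues one run_test call per workload; condensed into a loop for readability, with the caveat that the real script spells each call out and adds per-workload flags such as -f, -q, -a and -C:

    for w in copy fill copy_crc32c dualcast compare xor; do
      run_test "accel_$w" accel_test -t 1 -w "$w" -y
    done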
00:07:21.125 [2024-05-13 02:48:11.633596] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.126 [2024-05-13 02:48:11.726959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.126 02:48:11 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.126 02:48:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.503 02:48:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:22.503 02:48:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.503 02:48:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.503 02:48:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.503 02:48:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:22.503 02:48:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.503 02:48:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.503 02:48:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.503 02:48:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:22.503 02:48:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.503 02:48:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.503 02:48:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.503 02:48:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:22.503 02:48:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.503 02:48:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.503 02:48:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.503 02:48:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:22.503 02:48:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.503 02:48:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.503 02:48:12 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:07:22.503 02:48:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:22.503 02:48:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.503 02:48:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.503 02:48:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.503 02:48:12 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:22.503 02:48:12 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:22.503 02:48:12 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.503 00:07:22.503 real 0m1.407s 00:07:22.503 user 0m1.270s 00:07:22.503 sys 0m0.139s 00:07:22.503 02:48:12 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:22.503 02:48:12 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:22.503 ************************************ 00:07:22.503 END TEST accel_dualcast 00:07:22.503 ************************************ 00:07:22.503 02:48:12 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:22.503 02:48:12 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:22.503 02:48:12 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:22.503 02:48:12 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.503 ************************************ 00:07:22.503 START TEST accel_compare 00:07:22.503 ************************************ 00:07:22.503 02:48:13 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:07:22.503 02:48:13 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:22.503 02:48:13 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:22.503 02:48:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:22.503 02:48:13 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:22.503 02:48:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:22.503 02:48:13 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:22.503 02:48:13 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:22.503 02:48:13 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.503 02:48:13 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.503 02:48:13 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.503 02:48:13 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.503 02:48:13 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.503 02:48:13 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:22.503 02:48:13 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:22.503 [2024-05-13 02:48:13.029665] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:22.503 [2024-05-13 02:48:13.029755] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid235679 ] 00:07:22.503 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.503 [2024-05-13 02:48:13.063416] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
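accel_compare begins here; compare checks two equal-sized buffers for equality rather than moving data, so it ends with the same three checks that close every block in this log (visible at accel.sh@27, already expanded by the shell into [[ -n software ]], [[ -n compare ]] and the escaped string comparison). In unexpanded form those checks amount to:

    [[ -n $accel_module ]]            # accel_perf reported which module ran
    [[ -n $accel_opc ]]               # the expected opcode was parsed back
    [[ $accel_module == software ]]   # the module requested for this run handled the op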
00:07:22.503 [2024-05-13 02:48:13.096901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.503 [2024-05-13 02:48:13.189165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.503 02:48:13 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:22.503 02:48:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:22.503 02:48:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:22.503 02:48:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:22.503 02:48:13 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:22.503 02:48:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:22.503 02:48:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:22.503 02:48:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:22.503 02:48:13 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:22.503 02:48:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:22.503 02:48:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:22.503 02:48:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:22.503 02:48:13 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:22.503 02:48:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:22.503 02:48:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:22.503 02:48:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:22.503 02:48:13 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:22.503 02:48:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:22.503 02:48:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:22.503 02:48:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:22.503 02:48:13 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:22.503 02:48:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:22.503 02:48:13 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:22.503 02:48:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:22.503 02:48:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:22.503 02:48:13 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:22.503 02:48:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:22.503 02:48:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:22.503 02:48:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:22.503 02:48:13 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:22.503 02:48:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:22.503 02:48:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:22.503 02:48:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:22.503 02:48:13 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:22.504 02:48:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:22.504 02:48:13 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:22.504 02:48:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:22.504 02:48:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:22.504 02:48:13 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:22.504 02:48:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:22.504 02:48:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:22.504 02:48:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:07:22.504 02:48:13 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:22.504 02:48:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:22.504 02:48:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:22.504 02:48:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:22.504 02:48:13 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:22.504 02:48:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:22.504 02:48:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:22.504 02:48:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:22.504 02:48:13 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:22.504 02:48:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:22.504 02:48:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:22.504 02:48:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:22.504 02:48:13 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:22.504 02:48:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:22.504 02:48:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:22.504 02:48:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:22.504 02:48:13 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:22.504 02:48:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:22.504 02:48:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:22.504 02:48:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:22.504 02:48:13 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:22.504 02:48:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:22.504 02:48:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:22.504 02:48:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.880 02:48:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:23.880 02:48:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.880 02:48:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.880 02:48:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.880 02:48:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:23.880 02:48:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.880 02:48:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.880 02:48:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.880 02:48:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:23.880 02:48:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.880 02:48:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.880 02:48:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.880 02:48:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:23.880 02:48:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.880 02:48:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.880 02:48:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.880 02:48:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:23.880 02:48:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.880 02:48:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.880 02:48:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.880 02:48:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:07:23.880 02:48:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.880 02:48:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.880 02:48:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.880 02:48:14 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:23.880 02:48:14 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:23.880 02:48:14 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.880 00:07:23.880 real 0m1.400s 00:07:23.880 user 0m1.258s 00:07:23.880 sys 0m0.145s 00:07:23.880 02:48:14 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:23.880 02:48:14 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:23.880 ************************************ 00:07:23.880 END TEST accel_compare 00:07:23.880 ************************************ 00:07:23.880 02:48:14 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:23.881 02:48:14 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:23.881 02:48:14 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:23.881 02:48:14 accel -- common/autotest_common.sh@10 -- # set +x 00:07:23.881 ************************************ 00:07:23.881 START TEST accel_xor 00:07:23.881 ************************************ 00:07:23.881 02:48:14 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:07:23.881 02:48:14 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:23.881 02:48:14 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:23.881 02:48:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.881 02:48:14 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:23.881 02:48:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.881 02:48:14 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:23.881 02:48:14 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:23.881 02:48:14 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.881 02:48:14 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.881 02:48:14 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.881 02:48:14 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.881 02:48:14 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.881 02:48:14 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:23.881 02:48:14 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:23.881 [2024-05-13 02:48:14.479557] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:23.881 [2024-05-13 02:48:14.479620] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid235957 ] 00:07:23.881 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.881 [2024-05-13 02:48:14.512535] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
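Note on the trace pattern: the long runs of "IFS=:", "read -r var val" and "case "$var" in" lines in each of these tests come from the accel wrapper walking the configuration summary that accel_perf prints, capturing which opcode and which module were exercised; the "[[ -n software ]]", "[[ -n compare ]]" and "[[ software == \s\o\f\t\w\a\r\e ]]" checks just before END TEST then assert that both were seen and that the software module was used. A minimal sketch of that parse-and-assert pattern is below; the case labels, the whitespace trimming and the perf_output.txt name are assumptions for illustration, not the literal accel.sh source.

# minimal sketch of the parse-and-assert pattern traced above;
# the 'Workload Type'/'Module' labels, perf_output.txt and the
# whitespace trimming are assumptions, not the literal accel.sh code
accel_opc='' accel_module=''
while IFS=: read -r var val; do
    case "$var" in
        *'Workload Type'*) accel_opc=${val//[[:space:]]/} ;;
        *'Module'*) accel_module=${val//[[:space:]]/} ;;
    esac
done < perf_output.txt
[[ -n $accel_module ]] && [[ -n $accel_opc ]]
[[ $accel_module == software ]]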
00:07:23.881 [2024-05-13 02:48:14.542803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.881 [2024-05-13 02:48:14.633137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.139 02:48:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:24.139 02:48:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.139 02:48:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.139 02:48:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.139 02:48:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:24.139 02:48:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.139 02:48:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.139 02:48:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.139 02:48:14 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:24.139 02:48:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.139 02:48:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.139 02:48:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.139 02:48:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:24.139 02:48:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.139 02:48:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.140 02:48:14 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.140 02:48:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.076 02:48:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:25.076 02:48:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.076 02:48:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.076 02:48:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.076 02:48:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:25.076 02:48:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.076 02:48:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.076 02:48:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.076 02:48:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:25.076 02:48:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.076 02:48:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.076 02:48:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.076 02:48:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:25.076 02:48:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.076 02:48:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.076 02:48:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.076 02:48:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:25.076 02:48:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.076 02:48:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.076 02:48:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.076 02:48:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:25.076 02:48:15 accel.accel_xor -- accel/accel.sh@21 -- 
# case "$var" in 00:07:25.076 02:48:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.076 02:48:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.076 02:48:15 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:25.076 02:48:15 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:25.076 02:48:15 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.076 00:07:25.076 real 0m1.400s 00:07:25.076 user 0m1.266s 00:07:25.076 sys 0m0.135s 00:07:25.076 02:48:15 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:25.076 02:48:15 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:25.076 ************************************ 00:07:25.076 END TEST accel_xor 00:07:25.076 ************************************ 00:07:25.335 02:48:15 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:25.335 02:48:15 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:25.335 02:48:15 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:25.335 02:48:15 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.335 ************************************ 00:07:25.335 START TEST accel_xor 00:07:25.335 ************************************ 00:07:25.335 02:48:15 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:07:25.335 02:48:15 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:25.335 02:48:15 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:25.335 02:48:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.335 02:48:15 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:25.335 02:48:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.335 02:48:15 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:25.335 02:48:15 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:25.335 02:48:15 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.335 02:48:15 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.335 02:48:15 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.335 02:48:15 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.335 02:48:15 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.335 02:48:15 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:25.335 02:48:15 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:25.335 [2024-05-13 02:48:15.942834] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:25.335 [2024-05-13 02:48:15.942896] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid236117 ] 00:07:25.335 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.335 [2024-05-13 02:48:15.973178] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
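The accel_xor case starting here differs from the one that just finished only in the trailing "-x 3": the first pass traced "val=2" (two XOR source buffers, since no -x was given), this one traces "val=3". Stripped of the -c /dev/fd/62 config plumbing and the long workspace path, the two underlying invocations are effectively the following (a simplified rendering of the accel_perf command lines visible in the trace, not a new test):

# simplified from the accel_perf command lines visible in the trace
./build/examples/accel_perf -t 1 -w xor -y         # previous pass: 2 source buffers (val=2)
./build/examples/accel_perf -t 1 -w xor -y -x 3    # this pass: 3 source buffers (val=3)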
00:07:25.335 [2024-05-13 02:48:16.005384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.335 [2024-05-13 02:48:16.098836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.594 02:48:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:25.594 02:48:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.594 02:48:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.594 02:48:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.594 02:48:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:25.594 02:48:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.594 02:48:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.594 02:48:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.594 02:48:16 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:25.594 02:48:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.594 02:48:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.594 02:48:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.594 02:48:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:25.594 02:48:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.595 02:48:16 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.595 02:48:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.969 02:48:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.969 02:48:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.969 02:48:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.969 02:48:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.969 02:48:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.969 02:48:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.969 02:48:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.969 02:48:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.969 02:48:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.969 02:48:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.969 02:48:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.969 02:48:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.969 02:48:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.969 02:48:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.969 02:48:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.969 02:48:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.969 02:48:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.969 02:48:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.969 02:48:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.969 02:48:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.969 02:48:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.969 02:48:17 accel.accel_xor -- accel/accel.sh@21 -- 
# case "$var" in 00:07:26.970 02:48:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.970 02:48:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.970 02:48:17 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:26.970 02:48:17 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:26.970 02:48:17 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.970 00:07:26.970 real 0m1.416s 00:07:26.970 user 0m1.264s 00:07:26.970 sys 0m0.153s 00:07:26.970 02:48:17 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:26.970 02:48:17 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:26.970 ************************************ 00:07:26.970 END TEST accel_xor 00:07:26.970 ************************************ 00:07:26.970 02:48:17 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:26.970 02:48:17 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:26.970 02:48:17 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:26.970 02:48:17 accel -- common/autotest_common.sh@10 -- # set +x 00:07:26.970 ************************************ 00:07:26.970 START TEST accel_dif_verify 00:07:26.970 ************************************ 00:07:26.970 02:48:17 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:26.970 [2024-05-13 02:48:17.416028] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:26.970 [2024-05-13 02:48:17.416095] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid236276 ] 00:07:26.970 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.970 [2024-05-13 02:48:17.448115] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
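The three DIF cases that follow (dif_verify here, then dif_generate and dif_generate_copy) run through the same accel_test wrapper and differ only in the -w workload name; the extra '512 bytes' and '8 bytes' values that appear in their traces, and not in the compare/xor ones, are DIF-specific sizes reported for these workloads alongside the usual 4096-byte buffers. Run by hand they would look roughly like the loop below; the loop itself is illustrative, the harness drives each case through run_test instead.

# illustrative only -- the harness invokes each workload via run_test,
# not a loop; the flags are the ones visible in the trace
for w in dif_verify dif_generate dif_generate_copy; do
    ./build/examples/accel_perf -t 1 -w "$w" || exit 1
done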
00:07:26.970 [2024-05-13 02:48:17.479958] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.970 [2024-05-13 02:48:17.571456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:26.970 02:48:17 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:26.970 02:48:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.345 02:48:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:28.345 02:48:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:28.345 02:48:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.345 02:48:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.345 02:48:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:28.345 02:48:18 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:28.345 02:48:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.345 02:48:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.345 02:48:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:28.345 02:48:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:28.345 02:48:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.345 02:48:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.345 02:48:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:28.345 02:48:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:28.345 02:48:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.345 02:48:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.345 02:48:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:28.345 02:48:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:28.345 02:48:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.345 02:48:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.345 02:48:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:28.345 02:48:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:28.345 02:48:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.345 02:48:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.345 02:48:18 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:28.345 02:48:18 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:28.345 02:48:18 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.345 00:07:28.345 real 0m1.403s 00:07:28.345 user 0m1.260s 00:07:28.345 sys 0m0.146s 00:07:28.345 02:48:18 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:28.345 02:48:18 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:28.345 ************************************ 00:07:28.345 END TEST accel_dif_verify 00:07:28.345 ************************************ 00:07:28.345 02:48:18 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:28.345 02:48:18 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:28.345 02:48:18 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:28.345 02:48:18 accel -- common/autotest_common.sh@10 -- # set +x 00:07:28.345 ************************************ 00:07:28.345 START TEST accel_dif_generate 00:07:28.345 ************************************ 00:07:28.345 02:48:18 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:07:28.345 02:48:18 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:28.345 02:48:18 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:28.345 02:48:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.345 02:48:18 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:28.345 02:48:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.345 02:48:18 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:28.345 02:48:18 accel.accel_dif_generate -- 
accel/accel.sh@12 -- # build_accel_config 00:07:28.345 02:48:18 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:28.345 02:48:18 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:28.345 02:48:18 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.345 02:48:18 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.345 02:48:18 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:28.345 02:48:18 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:28.345 02:48:18 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:28.345 [2024-05-13 02:48:18.869328] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:28.345 [2024-05-13 02:48:18.869391] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid236510 ] 00:07:28.346 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.346 [2024-05-13 02:48:18.902705] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:28.346 [2024-05-13 02:48:18.932599] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.346 [2024-05-13 02:48:19.025938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.346 02:48:19 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # 
read -r var val 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.346 02:48:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:29.719 02:48:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:29.719 02:48:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:29.719 02:48:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:29.719 02:48:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:29.719 02:48:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:29.719 02:48:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:29.719 02:48:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:29.719 02:48:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:29.719 02:48:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:29.719 02:48:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:29.719 02:48:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:29.719 02:48:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:29.719 02:48:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:29.719 02:48:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:29.719 02:48:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:29.719 02:48:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:29.719 02:48:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:29.719 02:48:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:29.719 02:48:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:29.719 02:48:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:29.719 02:48:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:29.719 02:48:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:29.719 02:48:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:29.719 02:48:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:29.719 02:48:20 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:29.719 02:48:20 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:29.719 02:48:20 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.719 00:07:29.719 real 0m1.393s 00:07:29.719 user 0m1.256s 00:07:29.719 sys 0m0.140s 00:07:29.719 02:48:20 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:29.719 02:48:20 
accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:29.719 ************************************ 00:07:29.719 END TEST accel_dif_generate 00:07:29.719 ************************************ 00:07:29.719 02:48:20 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:29.719 02:48:20 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:29.719 02:48:20 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:29.719 02:48:20 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.719 ************************************ 00:07:29.719 START TEST accel_dif_generate_copy 00:07:29.719 ************************************ 00:07:29.719 02:48:20 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:07:29.719 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:29.719 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:29.719 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.719 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:29.719 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.719 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:29.719 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:29.719 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.719 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.719 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.719 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.719 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:29.719 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:29.719 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:29.719 [2024-05-13 02:48:20.311842] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:29.719 [2024-05-13 02:48:20.311898] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid236708 ] 00:07:29.719 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.719 [2024-05-13 02:48:20.344129] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
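Every opcode case so far reports a wall-clock time of roughly 1.4 s (real 0m1.393s to 0m1.416s): the workload itself runs for the 1 second requested with -t 1, and most of the remainder is accel_perf application start-up and teardown (the spdk_app_start and reactor NOTICE lines). The real/user/sys figures come from the harness's own timing around each run_test call; timing one case stand-alone would look roughly like this (illustrative only, the output format differs from the harness):

# illustrative stand-alone timing of a single case; the harness times it
# through run_test and prints the real/user/sys lines seen above
time ./build/examples/accel_perf -t 1 -w dif_generate_copy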
00:07:29.719 [2024-05-13 02:48:20.374134] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.719 [2024-05-13 02:48:20.465449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 
00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.977 02:48:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.944 02:48:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:30.944 02:48:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.944 02:48:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.944 02:48:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.944 02:48:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:30.944 02:48:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.944 02:48:21 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:30.944 02:48:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.944 02:48:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:30.944 02:48:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.944 02:48:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.944 02:48:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.944 02:48:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:30.944 02:48:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.944 02:48:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.944 02:48:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.944 02:48:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:30.944 02:48:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.944 02:48:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.944 02:48:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.944 02:48:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:30.944 02:48:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.944 02:48:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.944 02:48:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.944 02:48:21 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:30.944 02:48:21 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:30.944 02:48:21 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:30.944 00:07:30.944 real 0m1.400s 00:07:30.944 user 0m1.264s 00:07:30.944 sys 0m0.139s 00:07:30.944 02:48:21 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:30.944 02:48:21 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:30.944 ************************************ 00:07:30.944 END TEST accel_dif_generate_copy 00:07:30.944 ************************************ 00:07:30.944 02:48:21 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:30.944 02:48:21 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:30.944 02:48:21 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:30.944 02:48:21 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:30.944 02:48:21 accel -- common/autotest_common.sh@10 -- # set +x 00:07:31.204 ************************************ 00:07:31.204 START TEST accel_comp 00:07:31.204 ************************************ 00:07:31.204 02:48:21 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var 
val 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:31.204 [2024-05-13 02:48:21.763502] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:31.204 [2024-05-13 02:48:21.763564] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid236864 ] 00:07:31.204 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.204 [2024-05-13 02:48:21.795394] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:31.204 [2024-05-13 02:48:21.825469] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.204 [2024-05-13 02:48:21.919301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 
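The repeating val= / case "$var" in / IFS=: / read -r var val quadruplets traced here, and in every test below, are accel.sh walking through accel_perf's printed settings one colon-separated key/value pair at a time; the accel_module=software and accel_opc=compress assignments, and the later [[ -n software ]] / [[ -n compress ]] checks at accel.sh@27, are the harness picking out the engine and opcode it cares about. A minimal sketch of that parsing shape, with hypothetical key names and input source since only the trace is visible here, not the script itself:

    # illustrative only -- the key names and the input redirection are assumptions
    while IFS=: read -r var val; do
        case "$var" in
            module) accel_module=$val ;;   # e.g. "software"
            opcode) accel_opc=$val ;;      # e.g. "compress"
        esac
    done < settings_dump.txt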
00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.204 02:48:21 
accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.204 02:48:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:32.580 02:48:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:32.580 02:48:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.580 02:48:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:32.580 02:48:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:32.580 02:48:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:32.580 02:48:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.580 02:48:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:32.580 02:48:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:32.580 02:48:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:32.580 02:48:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.580 02:48:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:32.580 02:48:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:32.580 02:48:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:32.580 02:48:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.580 02:48:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:32.580 02:48:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:32.580 02:48:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:32.580 02:48:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.580 02:48:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:32.580 02:48:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:32.580 02:48:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:32.580 02:48:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.580 02:48:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:32.580 02:48:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:32.580 02:48:23 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:32.580 02:48:23 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:32.580 02:48:23 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:32.580 00:07:32.580 real 0m1.407s 00:07:32.580 user 0m1.266s 00:07:32.580 sys 0m0.144s 00:07:32.580 02:48:23 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:32.580 02:48:23 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:32.580 ************************************ 00:07:32.580 END TEST accel_comp 00:07:32.580 ************************************ 00:07:32.580 02:48:23 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:32.580 02:48:23 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:32.580 02:48:23 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:32.580 02:48:23 accel -- common/autotest_common.sh@10 -- # set +x 00:07:32.580 ************************************ 00:07:32.580 START TEST accel_decomp 00:07:32.580 ************************************ 00:07:32.580 02:48:23 accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:32.580 02:48:23 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:32.580 02:48:23 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:32.580 02:48:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.580 02:48:23 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:32.580 02:48:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.580 02:48:23 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:32.580 02:48:23 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:32.580 02:48:23 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:32.580 02:48:23 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:32.580 02:48:23 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.580 02:48:23 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.580 02:48:23 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:32.580 02:48:23 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:32.580 02:48:23 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:32.580 [2024-05-13 02:48:23.222011] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:32.580 [2024-05-13 02:48:23.222072] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid237018 ] 00:07:32.580 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.580 [2024-05-13 02:48:23.255223] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
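The command actually being exercised in this test is the accel_perf invocation shown at accel.sh@12 above. Pulled out of the harness, and with the CI workspace paths shortened to their repository-relative form, it looks roughly like the sketch below; the flag comments are inferences from the traced values ('1 seconds', '4096 bytes', the bib input file), not authoritative option documentation, and -c /dev/fd/62 appears to be the JSON accel config (built by build_accel_config) fed in through a file descriptor:

    ./build/examples/accel_perf \
        -c /dev/fd/62 \          # accel JSON config supplied by the harness via an fd
        -t 1 \                   # run the workload for 1 second
        -w decompress \          # workload type under test
        -l ./test/accel/bib \    # input file used by the compress/decompress workloads
        -y                       # verify results (inferred meaning)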
00:07:32.580 [2024-05-13 02:48:23.285495] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.580 [2024-05-13 02:48:23.379297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.839 02:48:23 
accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.839 02:48:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:32.840 02:48:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.840 02:48:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.840 02:48:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.840 02:48:23 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:32.840 02:48:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.840 02:48:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.840 02:48:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.840 02:48:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:32.840 02:48:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.840 02:48:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.840 02:48:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.840 02:48:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:32.840 02:48:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.840 02:48:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.840 02:48:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.840 02:48:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:32.840 02:48:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.840 02:48:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.840 02:48:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:34.218 02:48:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:34.218 02:48:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.218 02:48:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:34.218 02:48:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:34.218 02:48:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:34.218 02:48:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.218 02:48:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:34.218 02:48:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:34.218 02:48:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:34.218 02:48:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.218 02:48:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:34.218 02:48:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:34.218 02:48:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 
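The real/user/sys timing block and the starred START TEST / END TEST banners that bracket each of these runs come from the run_test helper (the surrounding xtrace frames point at common/autotest_common.sh), which appears to run the given test command under bash's time builtin and print the banners around it. A rough sketch of that shape, where the banner text is copied from the log but the function body is a simplified guess rather than the actual SPDK helper:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                # produces the real/user/sys lines seen in this log
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }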
00:07:34.218 02:48:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.218 02:48:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:34.218 02:48:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:34.218 02:48:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:34.218 02:48:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.218 02:48:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:34.218 02:48:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:34.218 02:48:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:34.218 02:48:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.218 02:48:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:34.218 02:48:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:34.218 02:48:24 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:34.218 02:48:24 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:34.218 02:48:24 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:34.218 00:07:34.218 real 0m1.397s 00:07:34.218 user 0m1.252s 00:07:34.218 sys 0m0.148s 00:07:34.218 02:48:24 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:34.218 02:48:24 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:34.218 ************************************ 00:07:34.218 END TEST accel_decomp 00:07:34.218 ************************************ 00:07:34.218 02:48:24 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:34.218 02:48:24 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:34.218 02:48:24 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:34.218 02:48:24 accel -- common/autotest_common.sh@10 -- # set +x 00:07:34.218 ************************************ 00:07:34.218 START TEST accel_decmop_full 00:07:34.218 ************************************ 00:07:34.218 02:48:24 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:34.218 02:48:24 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:07:34.218 02:48:24 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:07:34.218 02:48:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.218 02:48:24 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:34.218 02:48:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.218 02:48:24 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:34.218 02:48:24 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:07:34.218 02:48:24 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:34.218 02:48:24 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:34.218 02:48:24 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.218 02:48:24 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.218 02:48:24 
accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:34.218 02:48:24 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:07:34.218 02:48:24 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:07:34.218 [2024-05-13 02:48:24.665138] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:34.218 [2024-05-13 02:48:24.665200] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid237290 ] 00:07:34.218 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.218 [2024-05-13 02:48:24.697140] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:34.218 [2024-05-13 02:48:24.727026] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.218 [2024-05-13 02:48:24.819267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.218 02:48:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:34.218 02:48:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.218 02:48:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.219 02:48:24 accel.accel_decmop_full -- 
accel/accel.sh@20 -- # val='111250 bytes' 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:34.219 02:48:24 
accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.219 02:48:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.594 02:48:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:35.594 02:48:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.594 02:48:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.594 02:48:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.594 02:48:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:35.594 02:48:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.594 02:48:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.594 02:48:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.594 02:48:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:35.594 02:48:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.594 02:48:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.594 02:48:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.594 02:48:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:35.594 02:48:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.594 02:48:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.594 02:48:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.594 02:48:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:35.594 02:48:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.594 02:48:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.594 02:48:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.594 02:48:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:35.594 02:48:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.594 02:48:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.594 02:48:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.594 02:48:26 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:35.594 02:48:26 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:35.594 02:48:26 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:35.594 00:07:35.594 real 0m1.408s 00:07:35.594 user 0m1.262s 00:07:35.594 sys 0m0.148s 00:07:35.594 02:48:26 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:35.594 02:48:26 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:07:35.594 ************************************ 00:07:35.594 END TEST accel_decmop_full 00:07:35.594 ************************************ 00:07:35.594 02:48:26 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:35.595 02:48:26 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:35.595 02:48:26 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:35.595 02:48:26 accel -- common/autotest_common.sh@10 -- # set +x 00:07:35.595 ************************************ 00:07:35.595 START TEST accel_decomp_mcore 00:07:35.595 ************************************ 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- 
common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:35.595 [2024-05-13 02:48:26.125639] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:35.595 [2024-05-13 02:48:26.125722] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid237452 ] 00:07:35.595 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.595 [2024-05-13 02:48:26.160158] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
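Unlike the single-core runs above (Total cores available: 1, one reactor on core 0), this test passes -m 0xf, and the EAL output below reports four available cores with a reactor started on each of cores 0 through 3. The mask is simply a bitmap of core indices; a small check of which cores 0xf selects:

    # 0xf == binary 1111 -> cores 0,1,2,3, matching the four reactor_run lines below
    mask=0xf
    for core in 0 1 2 3 4 5; do
        (( mask & (1 << core) )) && echo "core $core enabled by mask"
    done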
00:07:35.595 [2024-05-13 02:48:26.189285] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:35.595 [2024-05-13 02:48:26.285800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.595 [2024-05-13 02:48:26.285871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.595 [2024-05-13 02:48:26.285972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:35.595 [2024-05-13 02:48:26.285975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@19 
-- # IFS=: 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.595 02:48:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.968 02:48:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:36.968 02:48:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.968 02:48:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.968 
02:48:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.968 02:48:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:36.968 02:48:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.968 02:48:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.968 02:48:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.968 02:48:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:36.968 02:48:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.968 02:48:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.968 02:48:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.968 02:48:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:36.968 02:48:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.968 02:48:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.968 02:48:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.969 02:48:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:36.969 02:48:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.969 02:48:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.969 02:48:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.969 02:48:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:36.969 02:48:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.969 02:48:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.969 02:48:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.969 02:48:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:36.969 02:48:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.969 02:48:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.969 02:48:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.969 02:48:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:36.969 02:48:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.969 02:48:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.969 02:48:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.969 02:48:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:36.969 02:48:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.969 02:48:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.969 02:48:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.969 02:48:27 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:36.969 02:48:27 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:36.969 02:48:27 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:36.969 00:07:36.969 real 0m1.420s 00:07:36.969 user 0m4.707s 00:07:36.969 sys 0m0.162s 00:07:36.969 02:48:27 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:36.969 02:48:27 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:36.969 ************************************ 00:07:36.969 END TEST accel_decomp_mcore 00:07:36.969 ************************************ 00:07:36.969 02:48:27 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore 
accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:36.969 02:48:27 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:36.969 02:48:27 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:36.969 02:48:27 accel -- common/autotest_common.sh@10 -- # set +x 00:07:36.969 ************************************ 00:07:36.969 START TEST accel_decomp_full_mcore 00:07:36.969 ************************************ 00:07:36.969 02:48:27 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:36.969 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:36.969 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:36.969 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.969 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:36.969 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.969 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:36.969 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:36.969 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:36.969 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:36.969 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.969 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.969 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:36.969 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:36.969 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:36.969 [2024-05-13 02:48:27.601959] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:36.969 [2024-05-13 02:48:27.602024] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid237613 ] 00:07:36.969 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.969 [2024-05-13 02:48:27.634447] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
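Two things distinguish this final run from the earlier ones. First, the -m 0xf decompress run that just finished reports user CPU time (4.707s) several times larger than wall-clock time (1.420s), which is what you would expect with four polling reactors active for the one-second run instead of one. Second, this test adds -o 0, and the traced buffer size switches from '4096 bytes' to '111250 bytes', suggesting the transfer size now follows the input data rather than a fixed 4 KiB block; that reading is inferred from the traces, not stated anywhere in the log. Side by side, the two invocation shapes (paths shortened) are roughly:

    # fixed 4 KiB transfers on a single core (earlier runs)
    ./build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l ./test/accel/bib -y
    # full-buffer transfers spread across cores 0-3 (this run)
    ./build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l ./test/accel/bib -y -o 0 -m 0xf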
00:07:36.969 [2024-05-13 02:48:27.665264] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:36.969 [2024-05-13 02:48:27.759768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.969 [2024-05-13 02:48:27.759826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.969 [2024-05-13 02:48:27.759943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:36.969 [2024-05-13 02:48:27.759945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val= 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:37.228 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.229 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.229 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.229 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:37.229 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.229 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.229 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.229 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:37.229 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.229 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.229 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.229 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:37.229 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.229 02:48:27 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:37.229 02:48:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.603 02:48:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:38.603 02:48:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.603 02:48:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.603 02:48:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.603 02:48:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:38.603 02:48:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.603 02:48:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.603 02:48:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.603 02:48:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:38.603 02:48:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.603 02:48:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.603 02:48:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.603 02:48:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:38.603 02:48:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.603 02:48:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.603 02:48:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.603 02:48:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:38.603 02:48:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.603 02:48:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.603 02:48:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.603 02:48:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:38.603 02:48:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.603 02:48:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.603 02:48:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.603 02:48:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:38.603 02:48:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.603 02:48:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.603 02:48:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.603 02:48:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:38.603 02:48:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.603 02:48:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.603 02:48:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.603 02:48:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:38.603 02:48:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.603 02:48:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.603 02:48:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.603 02:48:29 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:38.603 02:48:29 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:38.603 02:48:29 accel.accel_decomp_full_mcore 
-- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:38.603 00:07:38.603 real 0m1.430s 00:07:38.603 user 0m4.757s 00:07:38.603 sys 0m0.160s 00:07:38.603 02:48:29 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:38.603 02:48:29 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:38.603 ************************************ 00:07:38.603 END TEST accel_decomp_full_mcore 00:07:38.603 ************************************ 00:07:38.603 02:48:29 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:38.603 02:48:29 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:38.603 02:48:29 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:38.603 02:48:29 accel -- common/autotest_common.sh@10 -- # set +x 00:07:38.603 ************************************ 00:07:38.603 START TEST accel_decomp_mthread 00:07:38.603 ************************************ 00:07:38.603 02:48:29 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:38.603 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:38.603 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:38.603 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.603 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:38.603 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.603 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:38.603 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:38.603 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:38.603 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:38.603 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.603 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.603 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:38.603 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:38.603 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:38.603 [2024-05-13 02:48:29.079787] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:38.603 [2024-05-13 02:48:29.079848] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid237796 ] 00:07:38.603 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.603 [2024-05-13 02:48:29.112199] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
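For reference, every decompress case in this block is driven through the same accel_perf example binary whose command line is traced just above. Reproduced by hand it would look roughly like the sketch below; the paths are the ones used in this workspace, and feeding the accel JSON config over fd 62 from a local accel.json file is an assumption standing in for the harness's build_accel_config helper.
  # standalone sketch of the two-thread decompress run that starts here
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w decompress \
      -l "$SPDK/test/accel/bib" -y -T 2 62< accel.json   # accel.json: hypothetical accel subsystem config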
00:07:38.603 [2024-05-13 02:48:29.142043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.603 [2024-05-13 02:48:29.236016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.603 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:38.603 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.603 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.603 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.603 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:38.603 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.603 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.604 02:48:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.979 02:48:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:39.979 02:48:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.979 02:48:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.979 02:48:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.979 02:48:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 
00:07:39.979 02:48:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.979 02:48:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.979 02:48:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.979 02:48:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:39.979 02:48:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.979 02:48:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.979 02:48:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.979 02:48:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:39.979 02:48:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.979 02:48:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.980 02:48:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.980 02:48:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:39.980 02:48:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.980 02:48:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.980 02:48:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.980 02:48:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:39.980 02:48:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.980 02:48:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.980 02:48:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.980 02:48:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:39.980 02:48:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.980 02:48:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.980 02:48:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.980 02:48:30 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:39.980 02:48:30 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:39.980 02:48:30 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:39.980 00:07:39.980 real 0m1.420s 00:07:39.980 user 0m1.279s 00:07:39.980 sys 0m0.143s 00:07:39.980 02:48:30 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:39.980 02:48:30 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:39.980 ************************************ 00:07:39.980 END TEST accel_decomp_mthread 00:07:39.980 ************************************ 00:07:39.980 02:48:30 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:39.980 02:48:30 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:39.980 02:48:30 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:39.980 02:48:30 accel -- common/autotest_common.sh@10 -- # set +x 00:07:39.980 ************************************ 00:07:39.980 START TEST accel_decomp_full_mthread 00:07:39.980 ************************************ 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- 
accel/accel.sh@16 -- # local accel_opc 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:39.980 [2024-05-13 02:48:30.552583] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:39.980 [2024-05-13 02:48:30.552650] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid238045 ] 00:07:39.980 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.980 [2024-05-13 02:48:30.584252] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
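For reference, the accel_decomp_full_mthread case being set up here differs from the previous run only in passing -o 0; judging from the traced values, that switches the transfer size from the default 4096 bytes to the full 111250-byte bib file. A minimal sketch, under the same assumptions as the previous one:
  # full-buffer variant: -o 0 makes accel_perf use the whole input file per transfer
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w decompress \
      -l "$SPDK/test/accel/bib" -y -o 0 -T 2 62< accel.json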
00:07:39.980 [2024-05-13 02:48:30.615865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.980 [2024-05-13 02:48:30.705553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.980 02:48:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.354 02:48:31 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val= 00:07:41.354 02:48:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.354 02:48:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.354 02:48:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.354 02:48:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:41.354 02:48:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.355 02:48:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.355 02:48:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.355 02:48:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:41.355 02:48:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.355 02:48:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.355 02:48:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.355 02:48:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:41.355 02:48:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.355 02:48:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.355 02:48:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.355 02:48:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:41.355 02:48:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.355 02:48:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.355 02:48:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.355 02:48:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:41.355 02:48:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.355 02:48:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.355 02:48:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.355 02:48:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:41.355 02:48:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.355 02:48:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.355 02:48:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.355 02:48:31 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:41.355 02:48:31 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:41.355 02:48:31 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:41.355 00:07:41.355 real 0m1.430s 00:07:41.355 user 0m1.291s 00:07:41.355 sys 0m0.142s 00:07:41.355 02:48:31 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:41.355 02:48:31 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:41.355 ************************************ 00:07:41.355 END TEST accel_decomp_full_mthread 00:07:41.355 ************************************ 00:07:41.355 02:48:31 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:41.355 02:48:31 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:41.355 02:48:31 accel -- accel/accel.sh@137 -- # build_accel_config 
00:07:41.355 02:48:31 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:41.355 02:48:31 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:41.355 02:48:31 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:41.355 02:48:31 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:41.355 02:48:31 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.355 02:48:31 accel -- common/autotest_common.sh@10 -- # set +x 00:07:41.355 02:48:31 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.355 02:48:31 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:41.355 02:48:31 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:41.355 02:48:31 accel -- accel/accel.sh@41 -- # jq -r . 00:07:41.355 ************************************ 00:07:41.355 START TEST accel_dif_functional_tests 00:07:41.355 ************************************ 00:07:41.355 02:48:32 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:41.355 [2024-05-13 02:48:32.055625] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:41.355 [2024-05-13 02:48:32.055709] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid238205 ] 00:07:41.355 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.355 [2024-05-13 02:48:32.087063] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:41.355 [2024-05-13 02:48:32.117165] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:41.614 [2024-05-13 02:48:32.213228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.614 [2024-05-13 02:48:32.213286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:41.614 [2024-05-13 02:48:32.213290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.614 00:07:41.614 00:07:41.614 CUnit - A unit testing framework for C - Version 2.1-3 00:07:41.614 http://cunit.sourceforge.net/ 00:07:41.614 00:07:41.614 00:07:41.614 Suite: accel_dif 00:07:41.614 Test: verify: DIF generated, GUARD check ...passed 00:07:41.614 Test: verify: DIF generated, APPTAG check ...passed 00:07:41.614 Test: verify: DIF generated, REFTAG check ...passed 00:07:41.614 Test: verify: DIF not generated, GUARD check ...[2024-05-13 02:48:32.306907] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:41.614 [2024-05-13 02:48:32.306964] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:41.614 passed 00:07:41.614 Test: verify: DIF not generated, APPTAG check ...[2024-05-13 02:48:32.306999] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:41.614 [2024-05-13 02:48:32.307025] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:41.614 passed 00:07:41.614 Test: verify: DIF not generated, REFTAG check ...[2024-05-13 02:48:32.307071] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:41.614 [2024-05-13 02:48:32.307098] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:41.614 passed 00:07:41.614 Test: verify: APPTAG 
correct, APPTAG check ...passed 00:07:41.614 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-13 02:48:32.307158] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:41.614 passed 00:07:41.614 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:41.614 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:41.614 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:41.614 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-13 02:48:32.307300] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:41.614 passed 00:07:41.614 Test: generate copy: DIF generated, GUARD check ...passed 00:07:41.614 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:41.614 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:41.614 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:41.614 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:41.614 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:41.614 Test: generate copy: iovecs-len validate ...[2024-05-13 02:48:32.307548] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:07:41.614 passed 00:07:41.614 Test: generate copy: buffer alignment validate ...passed 00:07:41.614 00:07:41.614 Run Summary: Type Total Ran Passed Failed Inactive 00:07:41.614 suites 1 1 n/a 0 0 00:07:41.614 tests 20 20 20 0 0 00:07:41.614 asserts 204 204 204 0 n/a 00:07:41.614 00:07:41.614 Elapsed time = 0.001 seconds 00:07:41.873 00:07:41.873 real 0m0.501s 00:07:41.873 user 0m0.794s 00:07:41.873 sys 0m0.171s 00:07:41.873 02:48:32 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:41.873 02:48:32 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:41.873 ************************************ 00:07:41.873 END TEST accel_dif_functional_tests 00:07:41.873 ************************************ 00:07:41.873 00:07:41.873 real 0m31.788s 00:07:41.873 user 0m35.100s 00:07:41.873 sys 0m4.626s 00:07:41.873 02:48:32 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:41.873 02:48:32 accel -- common/autotest_common.sh@10 -- # set +x 00:07:41.873 ************************************ 00:07:41.873 END TEST accel 00:07:41.873 ************************************ 00:07:41.873 02:48:32 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:41.873 02:48:32 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:41.873 02:48:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:41.873 02:48:32 -- common/autotest_common.sh@10 -- # set +x 00:07:41.873 ************************************ 00:07:41.873 START TEST accel_rpc 00:07:41.873 ************************************ 00:07:41.873 02:48:32 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:41.873 * Looking for test storage... 
00:07:41.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:41.873 02:48:32 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:41.873 02:48:32 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=238388 00:07:41.873 02:48:32 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:41.873 02:48:32 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 238388 00:07:41.873 02:48:32 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 238388 ']' 00:07:41.873 02:48:32 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.873 02:48:32 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:41.873 02:48:32 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.873 02:48:32 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:41.873 02:48:32 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.132 [2024-05-13 02:48:32.693472] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:42.132 [2024-05-13 02:48:32.693563] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid238388 ] 00:07:42.132 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.132 [2024-05-13 02:48:32.725136] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:42.132 [2024-05-13 02:48:32.750511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.132 [2024-05-13 02:48:32.835782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.132 02:48:32 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:42.132 02:48:32 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:42.132 02:48:32 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:42.132 02:48:32 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:42.132 02:48:32 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:42.132 02:48:32 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:42.132 02:48:32 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:42.132 02:48:32 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:42.132 02:48:32 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:42.132 02:48:32 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.132 ************************************ 00:07:42.132 START TEST accel_assign_opcode 00:07:42.132 ************************************ 00:07:42.132 02:48:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:07:42.132 02:48:32 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:42.132 02:48:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.132 02:48:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:42.132 [2024-05-13 02:48:32.928513] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:42.132 02:48:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.132 02:48:32 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:42.132 02:48:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.132 02:48:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:42.391 [2024-05-13 02:48:32.936518] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:42.391 02:48:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.391 02:48:32 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:42.391 02:48:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.391 02:48:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:42.391 02:48:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.391 02:48:33 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:42.391 02:48:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.391 02:48:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:42.391 02:48:33 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:42.391 02:48:33 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:42.391 02:48:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.649 software 00:07:42.649 00:07:42.649 real 0m0.285s 
00:07:42.649 user 0m0.038s 00:07:42.649 sys 0m0.009s 00:07:42.649 02:48:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:42.649 02:48:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:42.649 ************************************ 00:07:42.649 END TEST accel_assign_opcode 00:07:42.649 ************************************ 00:07:42.649 02:48:33 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 238388 00:07:42.649 02:48:33 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 238388 ']' 00:07:42.649 02:48:33 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 238388 00:07:42.649 02:48:33 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:07:42.649 02:48:33 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:42.649 02:48:33 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 238388 00:07:42.649 02:48:33 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:42.649 02:48:33 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:42.649 02:48:33 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 238388' 00:07:42.649 killing process with pid 238388 00:07:42.649 02:48:33 accel_rpc -- common/autotest_common.sh@965 -- # kill 238388 00:07:42.649 02:48:33 accel_rpc -- common/autotest_common.sh@970 -- # wait 238388 00:07:42.908 00:07:42.908 real 0m1.079s 00:07:42.908 user 0m1.047s 00:07:42.908 sys 0m0.391s 00:07:42.908 02:48:33 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:42.908 02:48:33 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.908 ************************************ 00:07:42.908 END TEST accel_rpc 00:07:42.908 ************************************ 00:07:42.908 02:48:33 -- spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:42.908 02:48:33 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:42.908 02:48:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:42.908 02:48:33 -- common/autotest_common.sh@10 -- # set +x 00:07:43.166 ************************************ 00:07:43.166 START TEST app_cmdline 00:07:43.166 ************************************ 00:07:43.166 02:48:33 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:43.166 * Looking for test storage... 00:07:43.166 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:43.166 02:48:33 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:43.166 02:48:33 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=238592 00:07:43.166 02:48:33 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:43.166 02:48:33 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 238592 00:07:43.166 02:48:33 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 238592 ']' 00:07:43.166 02:48:33 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.166 02:48:33 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:43.166 02:48:33 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
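For reference, the accel_rpc suite that just finished exercises opcode assignment over the target's JSON-RPC socket. Done by hand, the flow it checks looks roughly like the sketch below; it omits the waitforlisten step the harness uses to wait for /var/tmp/spdk.sock, and uses only the RPCs traced above.
  # sketch of the assign-opcode flow from accel_rpc.sh
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/bin/spdk_tgt" --wait-for-rpc &                       # RPC server up, subsystems not initialized yet
  "$SPDK/scripts/rpc.py" accel_assign_opc -o copy -m software       # pin the copy opcode to the software module
  "$SPDK/scripts/rpc.py" framework_start_init                       # finish initialization
  "$SPDK/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy    # expected to report: software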
00:07:43.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.166 02:48:33 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:43.166 02:48:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:43.166 [2024-05-13 02:48:33.829251] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:43.166 [2024-05-13 02:48:33.829343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid238592 ] 00:07:43.166 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.166 [2024-05-13 02:48:33.861366] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:43.166 [2024-05-13 02:48:33.892204] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.425 [2024-05-13 02:48:33.984170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.683 02:48:34 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:43.683 02:48:34 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:07:43.683 02:48:34 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:43.683 { 00:07:43.683 "version": "SPDK v24.05-pre git sha1 dafdb289f", 00:07:43.683 "fields": { 00:07:43.683 "major": 24, 00:07:43.683 "minor": 5, 00:07:43.683 "patch": 0, 00:07:43.683 "suffix": "-pre", 00:07:43.683 "commit": "dafdb289f" 00:07:43.683 } 00:07:43.683 } 00:07:43.683 02:48:34 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:43.683 02:48:34 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:43.683 02:48:34 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:43.683 02:48:34 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:43.683 02:48:34 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:43.683 02:48:34 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.684 02:48:34 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:43.684 02:48:34 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:43.684 02:48:34 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:43.684 02:48:34 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.941 02:48:34 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:43.941 02:48:34 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:43.941 02:48:34 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:43.941 02:48:34 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:43.942 02:48:34 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:43.942 02:48:34 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:43.942 02:48:34 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:43.942 02:48:34 app_cmdline -- common/autotest_common.sh@640 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:43.942 02:48:34 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:43.942 02:48:34 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:43.942 02:48:34 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:43.942 02:48:34 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:43.942 02:48:34 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:43.942 02:48:34 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:43.942 request: 00:07:43.942 { 00:07:43.942 "method": "env_dpdk_get_mem_stats", 00:07:43.942 "req_id": 1 00:07:43.942 } 00:07:43.942 Got JSON-RPC error response 00:07:43.942 response: 00:07:43.942 { 00:07:43.942 "code": -32601, 00:07:43.942 "message": "Method not found" 00:07:43.942 } 00:07:44.199 02:48:34 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:44.199 02:48:34 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:44.199 02:48:34 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:44.199 02:48:34 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:44.199 02:48:34 app_cmdline -- app/cmdline.sh@1 -- # killprocess 238592 00:07:44.199 02:48:34 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 238592 ']' 00:07:44.199 02:48:34 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 238592 00:07:44.199 02:48:34 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:07:44.199 02:48:34 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:44.199 02:48:34 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 238592 00:07:44.199 02:48:34 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:44.199 02:48:34 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:44.199 02:48:34 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 238592' 00:07:44.199 killing process with pid 238592 00:07:44.199 02:48:34 app_cmdline -- common/autotest_common.sh@965 -- # kill 238592 00:07:44.199 02:48:34 app_cmdline -- common/autotest_common.sh@970 -- # wait 238592 00:07:44.457 00:07:44.457 real 0m1.451s 00:07:44.457 user 0m1.791s 00:07:44.457 sys 0m0.448s 00:07:44.457 02:48:35 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:44.457 02:48:35 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:44.457 ************************************ 00:07:44.457 END TEST app_cmdline 00:07:44.457 ************************************ 00:07:44.457 02:48:35 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:44.457 02:48:35 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:44.457 02:48:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:44.457 02:48:35 -- common/autotest_common.sh@10 -- # set +x 00:07:44.457 ************************************ 00:07:44.457 START TEST version 00:07:44.457 ************************************ 00:07:44.457 02:48:35 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 
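For reference, the app_cmdline test above starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are served; anything else is rejected with the JSON-RPC -32601 "Method not found" error captured in the output. A sketch of the same check by hand:
  # sketch: only the allow-listed RPCs respond on this target
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/bin/spdk_tgt" --rpcs-allowed spdk_get_version,rpc_get_methods &
  # (wait for /var/tmp/spdk.sock before issuing RPCs)
  "$SPDK/scripts/rpc.py" spdk_get_version          # returns the version object shown above
  "$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats    # fails: error -32601, Method not found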
00:07:44.716 * Looking for test storage... 00:07:44.716 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:44.716 02:48:35 version -- app/version.sh@17 -- # get_header_version major 00:07:44.716 02:48:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:44.716 02:48:35 version -- app/version.sh@14 -- # cut -f2 00:07:44.716 02:48:35 version -- app/version.sh@14 -- # tr -d '"' 00:07:44.716 02:48:35 version -- app/version.sh@17 -- # major=24 00:07:44.716 02:48:35 version -- app/version.sh@18 -- # get_header_version minor 00:07:44.716 02:48:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:44.716 02:48:35 version -- app/version.sh@14 -- # cut -f2 00:07:44.716 02:48:35 version -- app/version.sh@14 -- # tr -d '"' 00:07:44.716 02:48:35 version -- app/version.sh@18 -- # minor=5 00:07:44.716 02:48:35 version -- app/version.sh@19 -- # get_header_version patch 00:07:44.716 02:48:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:44.716 02:48:35 version -- app/version.sh@14 -- # cut -f2 00:07:44.716 02:48:35 version -- app/version.sh@14 -- # tr -d '"' 00:07:44.716 02:48:35 version -- app/version.sh@19 -- # patch=0 00:07:44.716 02:48:35 version -- app/version.sh@20 -- # get_header_version suffix 00:07:44.716 02:48:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:44.716 02:48:35 version -- app/version.sh@14 -- # cut -f2 00:07:44.716 02:48:35 version -- app/version.sh@14 -- # tr -d '"' 00:07:44.716 02:48:35 version -- app/version.sh@20 -- # suffix=-pre 00:07:44.716 02:48:35 version -- app/version.sh@22 -- # version=24.5 00:07:44.716 02:48:35 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:44.716 02:48:35 version -- app/version.sh@28 -- # version=24.5rc0 00:07:44.716 02:48:35 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:44.716 02:48:35 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:44.716 02:48:35 version -- app/version.sh@30 -- # py_version=24.5rc0 00:07:44.716 02:48:35 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:07:44.716 00:07:44.716 real 0m0.098s 00:07:44.716 user 0m0.051s 00:07:44.716 sys 0m0.067s 00:07:44.716 02:48:35 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:44.716 02:48:35 version -- common/autotest_common.sh@10 -- # set +x 00:07:44.716 ************************************ 00:07:44.716 END TEST version 00:07:44.716 ************************************ 00:07:44.716 02:48:35 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:07:44.716 02:48:35 -- spdk/autotest.sh@194 -- # uname -s 00:07:44.716 02:48:35 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:44.716 02:48:35 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:44.716 02:48:35 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:44.716 02:48:35 -- 
spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:44.716 02:48:35 -- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:07:44.716 02:48:35 -- spdk/autotest.sh@258 -- # timing_exit lib 00:07:44.716 02:48:35 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:44.716 02:48:35 -- common/autotest_common.sh@10 -- # set +x 00:07:44.716 02:48:35 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:07:44.716 02:48:35 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:07:44.716 02:48:35 -- spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:07:44.716 02:48:35 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:07:44.716 02:48:35 -- spdk/autotest.sh@281 -- # '[' tcp = rdma ']' 00:07:44.716 02:48:35 -- spdk/autotest.sh@284 -- # '[' tcp = tcp ']' 00:07:44.716 02:48:35 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:44.716 02:48:35 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:44.716 02:48:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:44.716 02:48:35 -- common/autotest_common.sh@10 -- # set +x 00:07:44.716 ************************************ 00:07:44.716 START TEST nvmf_tcp 00:07:44.716 ************************************ 00:07:44.717 02:48:35 nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:44.717 * Looking for test storage... 00:07:44.717 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:44.717 02:48:35 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:44.717 02:48:35 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:44.717 02:48:35 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:44.717 02:48:35 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:44.717 02:48:35 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:44.717 02:48:35 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:44.717 02:48:35 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:44.717 02:48:35 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:44.717 02:48:35 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:44.717 02:48:35 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:44.717 02:48:35 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:44.717 02:48:35 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:44.717 02:48:35 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:44.717 02:48:35 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:44.717 02:48:35 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:44.717 02:48:35 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:44.717 02:48:35 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:44.717 02:48:35 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:44.717 02:48:35 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:44.717 02:48:35 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:44.717 02:48:35 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:44.717 02:48:35 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:44.717 02:48:35 nvmf_tcp -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:44.717 02:48:35 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:44.717 02:48:35 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.717 02:48:35 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.717 02:48:35 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.717 02:48:35 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:44.717 02:48:35 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.717 02:48:35 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:44.717 02:48:35 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:44.717 02:48:35 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:44.717 02:48:35 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:44.717 02:48:35 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:44.717 02:48:35 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:44.717 02:48:35 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:44.717 02:48:35 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:44.717 02:48:35 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:44.717 02:48:35 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:44.717 02:48:35 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:44.717 02:48:35 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:44.717 02:48:35 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:44.717 02:48:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:44.717 02:48:35 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:44.717 02:48:35 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:44.717 02:48:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:44.717 02:48:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # 
xtrace_disable 00:07:44.717 02:48:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:44.717 ************************************ 00:07:44.717 START TEST nvmf_example 00:07:44.717 ************************************ 00:07:44.717 02:48:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:44.977 * Looking for test storage... 00:07:44.977 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:44.977 02:48:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:44.977 02:48:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:44.977 02:48:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:44.977 02:48:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:44.977 02:48:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:44.977 02:48:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:44.977 02:48:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:44.977 02:48:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:44.977 02:48:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:44.977 02:48:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:44.977 02:48:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:44.977 02:48:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:44.977 02:48:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:44.977 02:48:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:44.977 02:48:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:44.977 02:48:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:44.977 02:48:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:44.977 02:48:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:44.977 02:48:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:44.977 02:48:35 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:44.977 02:48:35 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:44.977 02:48:35 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:44.977 02:48:35 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.977 02:48:35 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.977 02:48:35 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.977 02:48:35 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:44.977 02:48:35 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.977 02:48:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:44.977 02:48:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:44.977 02:48:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:44.977 02:48:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:44.977 02:48:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:44.977 02:48:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:44.977 02:48:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:44.977 02:48:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:44.977 02:48:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:44.977 02:48:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:44.977 02:48:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:44.977 02:48:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:44.977 02:48:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:44.977 02:48:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:44.977 02:48:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:44.977 02:48:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:44.978 02:48:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:44.978 02:48:35 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:07:44.978 02:48:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:44.978 02:48:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:44.978 02:48:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:44.978 02:48:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:44.978 02:48:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:44.978 02:48:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:44.978 02:48:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:44.978 02:48:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:44.978 02:48:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:44.978 02:48:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:44.978 02:48:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:44.978 02:48:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:44.978 02:48:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:44.978 02:48:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:46.920 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:46.920 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:46.920 Found net devices under 
0000:0a:00.0: cvl_0_0 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:46.920 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:46.920 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:46.921 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:46.921 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:46.921 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:46.921 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:46.921 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:46.921 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:46.921 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:46.921 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:46.921 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:46.921 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:07:46.921 00:07:46.921 --- 10.0.0.2 ping statistics --- 00:07:46.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.921 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:07:46.921 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:46.921 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:46.921 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:07:46.921 00:07:46.921 --- 10.0.0.1 ping statistics --- 00:07:46.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.921 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:07:46.921 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:46.921 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:46.921 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:46.921 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:46.921 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:46.921 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:46.921 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:46.921 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:46.921 02:48:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:46.921 02:48:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:46.921 02:48:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:46.921 02:48:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:46.921 02:48:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:46.921 02:48:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:46.921 02:48:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:46.921 02:48:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=240500 00:07:46.921 02:48:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:46.921 02:48:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:46.921 02:48:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 240500 00:07:46.921 02:48:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 240500 ']' 00:07:46.921 02:48:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.921 02:48:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:46.921 02:48:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
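For orientation, the nvmftestinit/nvmf_tcp_init sequence traced above boils down to the following setup (a condensed sketch of the commands visible in the trace; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.x/24 addresses are specific to this runner's e810 ports):

# Park one NIC port in a private namespace so the target (10.0.0.2) and the
# initiator (10.0.0.1) exchange traffic over a real link between the two ports.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port on the initiator side and verify reachability both ways.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1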
00:07:46.921 02:48:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:46.921 02:48:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:46.921 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.854 02:48:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:47.854 02:48:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:07:47.854 02:48:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:47.854 02:48:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:47.854 02:48:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:47.854 02:48:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:47.854 02:48:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.854 02:48:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:47.854 02:48:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.854 02:48:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:47.854 02:48:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.854 02:48:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:47.854 02:48:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.854 02:48:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:47.854 02:48:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:47.854 02:48:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.854 02:48:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:47.854 02:48:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.854 02:48:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:47.854 02:48:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:47.854 02:48:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.854 02:48:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:47.855 02:48:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.855 02:48:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:47.855 02:48:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.855 02:48:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:48.112 02:48:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.112 02:48:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:48.112 02:48:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:48.112 EAL: No free 2048 kB hugepages reported on node 1 
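The rpc_cmd calls traced above configure the example target entirely over JSON-RPC; rpc_cmd effectively forwards to scripts/rpk... rather, scripts/rpc.py, so the same bring-up can be reproduced by hand roughly as follows (a sketch assuming the nvmf example app is already listening on /var/tmp/spdk.sock inside the target namespace, with paths relative to the SPDK repo root):

# Transport, a 64 MiB malloc bdev with 512-byte blocks, and one subsystem
# exposing it on 10.0.0.2:4420 -- flags exactly as in the trace above.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512                  # returns the bdev name, Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Load the target from the initiator side with the same flags as the run traced above.
build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'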
00:07:58.080 Initializing NVMe Controllers 00:07:58.080 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:58.080 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:58.080 Initialization complete. Launching workers. 00:07:58.080 ======================================================== 00:07:58.080 Latency(us) 00:07:58.080 Device Information : IOPS MiB/s Average min max 00:07:58.080 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12761.44 49.85 5014.76 681.79 19090.52 00:07:58.080 ======================================================== 00:07:58.080 Total : 12761.44 49.85 5014.76 681.79 19090.52 00:07:58.080 00:07:58.080 02:48:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:58.080 02:48:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:58.080 02:48:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:58.080 02:48:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:58.080 02:48:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:58.080 02:48:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:58.080 02:48:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:58.080 02:48:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:58.080 rmmod nvme_tcp 00:07:58.080 rmmod nvme_fabrics 00:07:58.080 rmmod nvme_keyring 00:07:58.080 02:48:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:58.080 02:48:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:58.080 02:48:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:58.080 02:48:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 240500 ']' 00:07:58.080 02:48:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 240500 00:07:58.080 02:48:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 240500 ']' 00:07:58.080 02:48:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 240500 00:07:58.080 02:48:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # uname 00:07:58.080 02:48:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:58.080 02:48:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 240500 00:07:58.338 02:48:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf 00:07:58.338 02:48:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:07:58.338 02:48:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 240500' 00:07:58.338 killing process with pid 240500 00:07:58.338 02:48:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # kill 240500 00:07:58.338 02:48:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@970 -- # wait 240500 00:07:58.338 nvmf threads initialize successfully 00:07:58.338 bdev subsystem init successfully 00:07:58.338 created a nvmf target service 00:07:58.338 create targets's poll groups done 00:07:58.338 all subsystems of target started 00:07:58.338 nvmf target is running 00:07:58.338 all subsystems of target stopped 00:07:58.338 destroy targets's poll groups done 00:07:58.338 destroyed the nvmf target service 00:07:58.338 bdev subsystem finish successfully 00:07:58.338 nvmf threads destroy successfully 00:07:58.339 02:48:49 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:58.339 02:48:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:58.339 02:48:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:58.339 02:48:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:58.339 02:48:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:58.339 02:48:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.339 02:48:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:58.339 02:48:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.877 02:48:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:00.877 02:48:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:00.877 02:48:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:00.877 02:48:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:00.877 00:08:00.877 real 0m15.666s 00:08:00.877 user 0m38.313s 00:08:00.877 sys 0m4.496s 00:08:00.877 02:48:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:00.877 02:48:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:00.877 ************************************ 00:08:00.877 END TEST nvmf_example 00:08:00.877 ************************************ 00:08:00.877 02:48:51 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:00.877 02:48:51 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:00.877 02:48:51 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:00.877 02:48:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:00.877 ************************************ 00:08:00.877 START TEST nvmf_filesystem 00:08:00.877 ************************************ 00:08:00.877 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:00.877 * Looking for test storage... 
00:08:00.877 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:00.877 02:48:51 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:08:00.877 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:00.877 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:08:00.877 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:00.877 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:00.877 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:08:00.877 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:00.877 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:08:00.877 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:00.877 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:00.877 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:00.877 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:00.877 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:00.877 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:00.877 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:00.877 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:00.877 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:00.877 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:00.877 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:00.878 02:48:51 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_FC=n 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_URING=n 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # 
_test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:08:00.878 02:48:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:00.878 #define SPDK_CONFIG_H 00:08:00.878 #define SPDK_CONFIG_APPS 1 00:08:00.879 #define SPDK_CONFIG_ARCH native 00:08:00.879 #undef SPDK_CONFIG_ASAN 00:08:00.879 #undef SPDK_CONFIG_AVAHI 00:08:00.879 #undef SPDK_CONFIG_CET 00:08:00.879 #define SPDK_CONFIG_COVERAGE 1 00:08:00.879 #define SPDK_CONFIG_CROSS_PREFIX 00:08:00.879 #undef SPDK_CONFIG_CRYPTO 00:08:00.879 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:00.879 #undef SPDK_CONFIG_CUSTOMOCF 00:08:00.879 #undef SPDK_CONFIG_DAOS 00:08:00.879 #define SPDK_CONFIG_DAOS_DIR 00:08:00.879 #define SPDK_CONFIG_DEBUG 1 00:08:00.879 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:00.879 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:00.879 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:08:00.879 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:00.879 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:00.879 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:00.879 #define SPDK_CONFIG_EXAMPLES 1 00:08:00.879 #undef SPDK_CONFIG_FC 00:08:00.879 #define SPDK_CONFIG_FC_PATH 00:08:00.879 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:00.879 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:00.879 #undef SPDK_CONFIG_FUSE 00:08:00.879 #undef SPDK_CONFIG_FUZZER 00:08:00.879 #define SPDK_CONFIG_FUZZER_LIB 00:08:00.879 #undef SPDK_CONFIG_GOLANG 00:08:00.879 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:00.879 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:00.879 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:00.879 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:08:00.879 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:00.879 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:00.879 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:00.879 #define SPDK_CONFIG_IDXD 1 00:08:00.879 #undef SPDK_CONFIG_IDXD_KERNEL 00:08:00.879 #undef SPDK_CONFIG_IPSEC_MB 00:08:00.879 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:00.879 #define SPDK_CONFIG_ISAL 1 00:08:00.879 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:00.879 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:00.879 #define SPDK_CONFIG_LIBDIR 00:08:00.879 #undef SPDK_CONFIG_LTO 00:08:00.879 #define SPDK_CONFIG_MAX_LCORES 00:08:00.879 #define SPDK_CONFIG_NVME_CUSE 1 00:08:00.879 #undef SPDK_CONFIG_OCF 00:08:00.879 #define 
SPDK_CONFIG_OCF_PATH 00:08:00.879 #define SPDK_CONFIG_OPENSSL_PATH 00:08:00.879 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:00.879 #define SPDK_CONFIG_PGO_DIR 00:08:00.879 #undef SPDK_CONFIG_PGO_USE 00:08:00.879 #define SPDK_CONFIG_PREFIX /usr/local 00:08:00.879 #undef SPDK_CONFIG_RAID5F 00:08:00.879 #undef SPDK_CONFIG_RBD 00:08:00.879 #define SPDK_CONFIG_RDMA 1 00:08:00.879 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:00.879 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:00.879 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:00.879 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:00.879 #define SPDK_CONFIG_SHARED 1 00:08:00.879 #undef SPDK_CONFIG_SMA 00:08:00.879 #define SPDK_CONFIG_TESTS 1 00:08:00.879 #undef SPDK_CONFIG_TSAN 00:08:00.879 #define SPDK_CONFIG_UBLK 1 00:08:00.879 #define SPDK_CONFIG_UBSAN 1 00:08:00.879 #undef SPDK_CONFIG_UNIT_TESTS 00:08:00.879 #undef SPDK_CONFIG_URING 00:08:00.879 #define SPDK_CONFIG_URING_PATH 00:08:00.879 #undef SPDK_CONFIG_URING_ZNS 00:08:00.879 #undef SPDK_CONFIG_USDT 00:08:00.879 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:00.879 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:00.879 #define SPDK_CONFIG_VFIO_USER 1 00:08:00.879 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:00.879 #define SPDK_CONFIG_VHOST 1 00:08:00.879 #define SPDK_CONFIG_VIRTIO 1 00:08:00.879 #undef SPDK_CONFIG_VTUNE 00:08:00.879 #define SPDK_CONFIG_VTUNE_DIR 00:08:00.879 #define SPDK_CONFIG_WERROR 1 00:08:00.879 #define SPDK_CONFIG_WPDK_DIR 00:08:00.879 #undef SPDK_CONFIG_XNVME 00:08:00.879 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:00.879 02:48:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:00.879 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:00.879 02:48:51 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.879 02:48:51 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.879 02:48:51 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.879 02:48:51 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.879 02:48:51 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.879 02:48:51 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.879 02:48:51 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:00.879 02:48:51 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.879 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:00.879 02:48:51 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:00.879 02:48:51 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:00.879 02:48:51 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:00.879 02:48:51 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:00.879 02:48:51 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:00.879 02:48:51 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:08:00.879 02:48:51 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:08:00.879 02:48:51 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:08:00.879 02:48:51 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:08:00.879 02:48:51 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:08:00.879 02:48:51 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 1 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 1 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # export 
SPDK_TEST_NVME_CLI 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 1 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # : tcp 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:08:00.880 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # : main 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 0 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # : e810 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@157 -- # : 0 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 0 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 0 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm 
-rf /var/tmp/asan_suppression_file 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:08:00.881 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # export 
CLEAR_HUGE=yes 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j48 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=tcp 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 242205 ]] 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 242205 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.mZZ8pA 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.mZZ8pA/tests/target /tmp/spdk.mZZ8pA 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # 
mounts["$mount"]=spdk_devtmpfs 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=67108864 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=976711680 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4307718144 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=49957392384 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=61994733568 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=12037341184 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30993989632 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997364736 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=3375104 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=12390187008 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=12398948352 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=8761344 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:00.882 02:48:51 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30995664896 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997368832 00:08:00.882 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=1703936 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=6199468032 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=6199472128 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:08:00.883 * Looking for test storage... 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/ 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=49957392384 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # new_size=14251933696 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:00.883 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 00:08:00.883 
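[Editor's sketch] The trace above is set_test_storage() choosing where the ~2 GiB of test artifacts will live: it parses df -T into mount/size/avail arrays, resolves each candidate directory to its mount point, and exports the first candidate with enough free space as SPDK_TEST_STORAGE. A simplified stand-alone version of that selection, assuming GNU df is available (a paraphrase, not the verbatim autotest_common.sh helper):

    testdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
    storage_fallback=$(mktemp -udt spdk.XXXXXX)                 # e.g. /tmp/spdk.mZZ8pA in this run
    requested_size=$((2147483648 + 67108864))                   # 2 GiB of test data plus slack
    for target_dir in "$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback"; do
        mkdir -p "$target_dir"
        # bytes free on the filesystem backing this candidate directory
        target_space=$(df --output=avail -B1 "$target_dir" | tail -n 1)
        if (( target_space >= requested_size )); then
            export SPDK_TEST_STORAGE=$target_dir
            break
        fi
    done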
02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:00.883 02:48:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:00.884 02:48:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:00.884 02:48:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:00.884 02:48:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:00.884 02:48:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:00.884 02:48:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:00.884 02:48:51 
nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.884 02:48:51 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.884 02:48:51 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.884 02:48:51 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.884 02:48:51 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.884 02:48:51 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.884 02:48:51 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:00.884 02:48:51 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.884 02:48:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:08:00.884 02:48:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:00.884 02:48:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:00.884 02:48:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:00.884 02:48:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:08:00.884 02:48:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:00.884 02:48:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:00.884 02:48:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:00.884 02:48:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:00.884 02:48:51 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:00.884 02:48:51 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:00.884 02:48:51 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:08:00.884 02:48:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:00.884 02:48:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:00.884 02:48:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:00.884 02:48:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:00.884 02:48:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:00.884 02:48:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.884 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:00.884 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.884 02:48:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:00.884 02:48:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:00.884 02:48:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:00.884 02:48:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:02.784 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:02.784 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:02.784 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:02.784 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:02.784 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:02.784 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:02.784 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:02.784 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:02.784 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:02.784 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:08:02.784 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:02.784 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:08:02.784 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:02.784 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
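[Editor's sketch] The e810/x722/mlx arrays assembled above are allowlists of PCI vendor:device IDs; the trace that follows walks the matching PCI functions and picks up their kernel net devices from the /sys/bus/pci/devices/<bdf>/net/ glob. A minimal stand-alone equivalent for the two Intel E810 IDs seen in this run (0x1592, 0x159b), reading sysfs directly (a hypothetical helper, not the nvmf/common.sh code):

    for pci in /sys/bus/pci/devices/*; do
        [[ $(cat "$pci/vendor") == 0x8086 ]] || continue
        dev=$(cat "$pci/device")
        [[ $dev == 0x1592 || $dev == 0x159b ]] || continue
        # NICs bound to a kernel driver expose their interface name under <bdf>/net/
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done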
00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:02.785 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:02.785 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.785 02:48:53 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:02.785 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:02.785 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link 
set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:02.785 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:02.785 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:08:02.785 00:08:02.785 --- 10.0.0.2 ping statistics --- 00:08:02.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.785 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:02.785 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:02.785 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:08:02.785 00:08:02.785 --- 10.0.0.1 ping statistics --- 00:08:02.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.785 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:02.785 ************************************ 00:08:02.785 START TEST nvmf_filesystem_no_in_capsule 00:08:02.785 ************************************ 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:08:02.785 02:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:08:02.786 02:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:02.786 02:48:53 
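[Editor's note] Condensing nvmf_tcp_init from the trace above: the first E810 port (cvl_0_0) becomes the target side inside a private network namespace, the second port (cvl_0_1) stays in the host namespace as the initiator side, both get addresses on 10.0.0.0/24, TCP port 4420 is opened, connectivity is checked in both directions, and the nvme-tcp initiator driver is loaded. The same commands gathered in one place (all taken from the log; run as root):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                    # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator side (host namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT # open NVMe/TCP port 4420 on the initiator-side interface
    ping -c 1 10.0.0.2                                           # host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1             # namespace -> host
    modprobe nvme-tcp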
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:02.786 02:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:02.786 02:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:02.786 02:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=243827 00:08:02.786 02:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:02.786 02:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 243827 00:08:02.786 02:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 243827 ']' 00:08:02.786 02:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.786 02:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:02.786 02:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.786 02:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:02.786 02:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:02.786 [2024-05-13 02:48:53.526993] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:08:02.786 [2024-05-13 02:48:53.527089] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:02.786 EAL: No free 2048 kB hugepages reported on node 1 00:08:02.786 [2024-05-13 02:48:53.565299] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:03.044 [2024-05-13 02:48:53.594634] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:03.044 [2024-05-13 02:48:53.688723] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:03.044 [2024-05-13 02:48:53.688769] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:03.044 [2024-05-13 02:48:53.688783] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:03.044 [2024-05-13 02:48:53.688795] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:03.044 [2024-05-13 02:48:53.688805] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
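[Editor's sketch] nvmfappstart above launches the SPDK target inside the target namespace and waits for its RPC socket; the trace that follows then configures it through rpc_cmd. Assuming rpc_cmd is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock (and using a simple poll in place of waitforlisten), the equivalent manual sequence would be roughly:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # target on 4 cores, all tracepoint groups enabled, running in the namespace that owns cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    while [[ ! -S /var/tmp/spdk.sock ]]; do sleep 0.1; done          # crude stand-in for waitforlisten
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0    # -c 0: the "no_in_capsule" case
    ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1           # 512 MiB bdev, 512-byte blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420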
00:08:03.044 [2024-05-13 02:48:53.689113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.044 [2024-05-13 02:48:53.689164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:03.044 [2024-05-13 02:48:53.689286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:03.044 [2024-05-13 02:48:53.689288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.044 02:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:03.044 02:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:08:03.044 02:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:03.044 02:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:03.044 02:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:03.044 02:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:03.044 02:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:03.044 02:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:03.044 02:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.044 02:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:03.044 [2024-05-13 02:48:53.828260] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:03.044 02:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.044 02:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:03.044 02:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.044 02:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:03.301 Malloc1 00:08:03.301 02:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.301 02:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:03.301 02:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.301 02:48:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:03.301 02:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.301 02:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:03.301 02:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.302 02:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:08:03.302 02:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.302 02:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:03.302 02:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.302 02:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:03.302 [2024-05-13 02:48:54.012590] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:03.302 [2024-05-13 02:48:54.012913] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:03.302 02:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.302 02:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:03.302 02:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:08:03.302 02:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:08:03.302 02:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:08:03.302 02:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:08:03.302 02:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:03.302 02:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.302 02:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:03.302 02:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.302 02:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:08:03.302 { 00:08:03.302 "name": "Malloc1", 00:08:03.302 "aliases": [ 00:08:03.302 "7bd4db8c-54b3-41ee-bb8e-e455b777dc61" 00:08:03.302 ], 00:08:03.302 "product_name": "Malloc disk", 00:08:03.302 "block_size": 512, 00:08:03.302 "num_blocks": 1048576, 00:08:03.302 "uuid": "7bd4db8c-54b3-41ee-bb8e-e455b777dc61", 00:08:03.302 "assigned_rate_limits": { 00:08:03.302 "rw_ios_per_sec": 0, 00:08:03.302 "rw_mbytes_per_sec": 0, 00:08:03.302 "r_mbytes_per_sec": 0, 00:08:03.302 "w_mbytes_per_sec": 0 00:08:03.302 }, 00:08:03.302 "claimed": true, 00:08:03.302 "claim_type": "exclusive_write", 00:08:03.302 "zoned": false, 00:08:03.302 "supported_io_types": { 00:08:03.302 "read": true, 00:08:03.302 "write": true, 00:08:03.302 "unmap": true, 00:08:03.302 "write_zeroes": true, 00:08:03.302 "flush": true, 00:08:03.302 "reset": true, 00:08:03.302 "compare": false, 00:08:03.302 "compare_and_write": false, 00:08:03.302 "abort": true, 00:08:03.302 "nvme_admin": false, 00:08:03.302 "nvme_io": false 00:08:03.302 }, 00:08:03.302 "memory_domains": [ 00:08:03.302 { 00:08:03.302 "dma_device_id": "system", 00:08:03.302 "dma_device_type": 1 
00:08:03.302 }, 00:08:03.302 { 00:08:03.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.302 "dma_device_type": 2 00:08:03.302 } 00:08:03.302 ], 00:08:03.302 "driver_specific": {} 00:08:03.302 } 00:08:03.302 ]' 00:08:03.302 02:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:08:03.302 02:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:08:03.302 02:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:08:03.559 02:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:08:03.559 02:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:08:03.559 02:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:08:03.559 02:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:03.559 02:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:04.124 02:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:04.124 02:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:08:04.124 02:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:04.124 02:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:04.124 02:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:08:06.020 02:48:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:06.020 02:48:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:06.020 02:48:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:06.020 02:48:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:06.020 02:48:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:06.020 02:48:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:08:06.020 02:48:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:06.020 02:48:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:06.020 02:48:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:06.020 02:48:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:06.020 02:48:56 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:06.020 02:48:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:06.020 02:48:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:06.020 02:48:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:06.020 02:48:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:06.020 02:48:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:06.020 02:48:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:06.277 02:48:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:06.841 02:48:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:07.773 02:48:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:07.773 02:48:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:07.773 02:48:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:07.773 02:48:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:07.773 02:48:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:07.773 ************************************ 00:08:07.773 START TEST filesystem_ext4 00:08:07.773 ************************************ 00:08:07.773 02:48:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:07.773 02:48:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:07.773 02:48:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:07.773 02:48:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:07.773 02:48:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:08:07.773 02:48:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:07.773 02:48:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:08:07.773 02:48:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:08:07.773 02:48:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:08:07.773 02:48:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:08:07.773 02:48:58 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:07.773 mke2fs 1.46.5 (30-Dec-2021) 00:08:08.031 Discarding device blocks: 0/522240 done 00:08:08.031 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:08.031 Filesystem UUID: 4f7083a1-d6a8-40dc-a4e0-bc6f812dd93c 00:08:08.031 Superblock backups stored on blocks: 00:08:08.031 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:08.031 00:08:08.031 Allocating group tables: 0/64 done 00:08:08.031 Writing inode tables: 0/64 done 00:08:08.625 Creating journal (8192 blocks): done 00:08:08.625 Writing superblocks and filesystem accounting information: 0/64 done 00:08:08.625 00:08:08.625 02:48:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:08:08.625 02:48:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:08.625 02:48:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:08.625 02:48:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:08:08.625 02:48:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:08.625 02:48:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:08:08.625 02:48:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:08.625 02:48:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:08.625 02:48:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 243827 00:08:08.625 02:48:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:08.625 02:48:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:08.625 02:48:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:08.625 02:48:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:08.625 00:08:08.625 real 0m0.808s 00:08:08.625 user 0m0.013s 00:08:08.625 sys 0m0.033s 00:08:08.625 02:48:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:08.625 02:48:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:08.625 ************************************ 00:08:08.625 END TEST filesystem_ext4 00:08:08.625 ************************************ 00:08:08.625 02:48:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:08.625 02:48:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:08.625 02:48:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:08.625 02:48:59 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:08.625 ************************************ 00:08:08.625 START TEST filesystem_btrfs 00:08:08.625 ************************************ 00:08:08.625 02:48:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:08.625 02:48:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:08.625 02:48:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:08.625 02:48:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:08.625 02:48:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:08:08.625 02:48:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:08.625 02:48:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:08:08.625 02:48:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:08:08.625 02:48:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:08:08.625 02:48:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:08:08.625 02:48:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:08.883 btrfs-progs v6.6.2 00:08:08.883 See https://btrfs.readthedocs.io for more information. 00:08:08.883 00:08:08.883 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:08.883 NOTE: several default settings have changed in version 5.15, please make sure 00:08:08.883 this does not affect your deployments: 00:08:08.883 - DUP for metadata (-m dup) 00:08:08.883 - enabled no-holes (-O no-holes) 00:08:08.883 - enabled free-space-tree (-R free-space-tree) 00:08:08.883 00:08:08.883 Label: (null) 00:08:08.883 UUID: 73d1e6dd-efd6-4c27-a12f-27e708f3c7d0 00:08:08.883 Node size: 16384 00:08:08.883 Sector size: 4096 00:08:08.883 Filesystem size: 510.00MiB 00:08:08.883 Block group profiles: 00:08:08.883 Data: single 8.00MiB 00:08:08.883 Metadata: DUP 32.00MiB 00:08:08.883 System: DUP 8.00MiB 00:08:08.883 SSD detected: yes 00:08:08.883 Zoned device: no 00:08:08.883 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:08.883 Runtime features: free-space-tree 00:08:08.883 Checksum: crc32c 00:08:08.883 Number of devices: 1 00:08:08.883 Devices: 00:08:08.883 ID SIZE PATH 00:08:08.883 1 510.00MiB /dev/nvme0n1p1 00:08:08.883 00:08:08.883 02:48:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:08:08.883 02:48:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:09.450 02:49:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:09.450 02:49:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:08:09.450 02:49:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:09.450 02:49:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:08:09.450 02:49:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:09.450 02:49:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:09.450 02:49:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 243827 00:08:09.450 02:49:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:09.450 02:49:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:09.450 02:49:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:09.450 02:49:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:09.450 00:08:09.450 real 0m0.755s 00:08:09.450 user 0m0.019s 00:08:09.450 sys 0m0.040s 00:08:09.450 02:49:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:09.450 02:49:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:09.450 ************************************ 00:08:09.450 END TEST filesystem_btrfs 00:08:09.450 ************************************ 00:08:09.450 02:49:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:09.450 02:49:00 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:09.450 02:49:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:09.450 02:49:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.450 ************************************ 00:08:09.450 START TEST filesystem_xfs 00:08:09.450 ************************************ 00:08:09.450 02:49:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:08:09.450 02:49:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:09.450 02:49:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:09.450 02:49:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:09.450 02:49:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:08:09.450 02:49:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:09.450 02:49:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:08:09.450 02:49:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:08:09.450 02:49:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:08:09.450 02:49:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:08:09.450 02:49:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:09.708 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:09.708 = sectsz=512 attr=2, projid32bit=1 00:08:09.708 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:09.708 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:09.708 data = bsize=4096 blocks=130560, imaxpct=25 00:08:09.708 = sunit=0 swidth=0 blks 00:08:09.708 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:09.708 log =internal log bsize=4096 blocks=16384, version=2 00:08:09.708 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:09.708 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:10.639 Discarding blocks...Done. 
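All three filesystem bodies in this first pass run against the setup traced at the start of it: the target is configured over RPC, the kernel host connects, and the exported namespace is partitioned. Condensed from that xtrace output (a sketch, not the verbatim script; rpc_cmd is the harness's RPC wrapper, and the NQNs, address and host UUID are the ones printed in this run):

  # target side: TCP transport with in-capsule data disabled (-c 0), one 512 MiB malloc namespace
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
  rpc_cmd bdev_malloc_create 512 512 -b Malloc1
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # host side: connect, wait for the SPDKISFASTANDAWESOME serial to appear, then partition it
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # the harness retries this until it returns 1
  mkdir -p /mnt/device
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe && sleep 1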
00:08:10.639 02:49:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:08:10.639 02:49:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:13.159 02:49:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:13.159 02:49:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:08:13.159 02:49:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:13.159 02:49:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:08:13.159 02:49:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:08:13.159 02:49:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:13.159 02:49:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 243827 00:08:13.159 02:49:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:13.159 02:49:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:13.159 02:49:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:13.159 02:49:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:13.159 00:08:13.159 real 0m3.553s 00:08:13.159 user 0m0.016s 00:08:13.159 sys 0m0.042s 00:08:13.159 02:49:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:13.159 02:49:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:13.159 ************************************ 00:08:13.159 END TEST filesystem_xfs 00:08:13.159 ************************************ 00:08:13.159 02:49:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:13.416 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:13.416 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:13.416 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:13.416 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:13.416 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:08:13.416 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:13.416 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:13.416 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:13.416 
02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:13.416 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:08:13.416 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:13.416 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.416 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:13.416 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.416 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:13.416 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 243827 00:08:13.416 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 243827 ']' 00:08:13.416 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 243827 00:08:13.416 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:08:13.416 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:13.416 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 243827 00:08:13.416 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:13.416 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:13.416 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 243827' 00:08:13.416 killing process with pid 243827 00:08:13.416 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 243827 00:08:13.417 [2024-05-13 02:49:04.144576] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:13.417 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 243827 00:08:13.982 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:13.982 00:08:13.982 real 0m11.087s 00:08:13.982 user 0m42.373s 00:08:13.982 sys 0m1.627s 00:08:13.982 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:13.982 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:13.982 ************************************ 00:08:13.982 END TEST nvmf_filesystem_no_in_capsule 00:08:13.982 ************************************ 00:08:13.982 02:49:04 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:13.982 02:49:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 
-le 1 ']' 00:08:13.982 02:49:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:13.982 02:49:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:13.982 ************************************ 00:08:13.982 START TEST nvmf_filesystem_in_capsule 00:08:13.982 ************************************ 00:08:13.982 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:08:13.982 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:13.982 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:13.982 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:13.982 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:13.982 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:13.982 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=245385 00:08:13.982 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:13.982 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 245385 00:08:13.982 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 245385 ']' 00:08:13.982 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.982 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:13.982 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.982 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:13.982 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:13.982 [2024-05-13 02:49:04.665357] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:08:13.982 [2024-05-13 02:49:04.665472] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:13.982 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.983 [2024-05-13 02:49:04.703633] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:13.983 [2024-05-13 02:49:04.736171] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:14.242 [2024-05-13 02:49:04.827335] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:14.242 [2024-05-13 02:49:04.827389] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
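The in-capsule pass repeats the whole procedure against a fresh target, this time with a 4096-byte in-capsule data limit. As the startup trace above shows, nvmfappstart -m 0xF amounts to launching nvmf_tgt inside the test's network namespace and waiting for its RPC socket (condensed sketch; the namespace and binary path are the ones from this run):

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!                 # came back as 245385 in this run
  waitforlisten $nvmfpid     # harness helper: blocks until /var/tmp/spdk.sock accepts RPCs
  # -m 0xF starts one reactor on each of cores 0-3; -e 0xFFFF is the tracepoint group mask
  # reported in the NOTICE lines above

The only functional difference from the first pass is that nvmf_create_transport is called with -c 4096 instead of -c 0, as the next trace lines show.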
00:08:14.242 [2024-05-13 02:49:04.827412] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:14.242 [2024-05-13 02:49:04.827426] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:14.242 [2024-05-13 02:49:04.827439] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:14.242 [2024-05-13 02:49:04.827532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:14.242 [2024-05-13 02:49:04.827607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:14.242 [2024-05-13 02:49:04.827708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:14.242 [2024-05-13 02:49:04.827731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.242 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:14.242 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:08:14.242 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:14.242 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:14.242 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:14.242 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:14.242 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:14.242 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:14.242 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.242 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:14.242 [2024-05-13 02:49:04.974557] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:14.242 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.242 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:14.242 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.242 02:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:14.500 Malloc1 00:08:14.500 02:49:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.500 02:49:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:14.500 02:49:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.500 02:49:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:14.500 02:49:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.500 02:49:05 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:14.500 02:49:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.500 02:49:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:14.500 02:49:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.500 02:49:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:14.500 02:49:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.500 02:49:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:14.500 [2024-05-13 02:49:05.161766] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:14.500 [2024-05-13 02:49:05.162099] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:14.500 02:49:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.500 02:49:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:14.500 02:49:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:08:14.500 02:49:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:08:14.500 02:49:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:08:14.500 02:49:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:08:14.500 02:49:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:14.500 02:49:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.500 02:49:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:14.500 02:49:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.500 02:49:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:08:14.500 { 00:08:14.500 "name": "Malloc1", 00:08:14.500 "aliases": [ 00:08:14.500 "5c75964b-2c00-4525-8a4d-9c4d01d28f4f" 00:08:14.500 ], 00:08:14.500 "product_name": "Malloc disk", 00:08:14.500 "block_size": 512, 00:08:14.500 "num_blocks": 1048576, 00:08:14.500 "uuid": "5c75964b-2c00-4525-8a4d-9c4d01d28f4f", 00:08:14.500 "assigned_rate_limits": { 00:08:14.500 "rw_ios_per_sec": 0, 00:08:14.500 "rw_mbytes_per_sec": 0, 00:08:14.500 "r_mbytes_per_sec": 0, 00:08:14.500 "w_mbytes_per_sec": 0 00:08:14.500 }, 00:08:14.500 "claimed": true, 00:08:14.500 "claim_type": "exclusive_write", 00:08:14.500 "zoned": false, 00:08:14.500 "supported_io_types": { 00:08:14.500 "read": true, 00:08:14.500 "write": true, 00:08:14.500 "unmap": true, 00:08:14.500 "write_zeroes": true, 00:08:14.500 "flush": true, 00:08:14.500 "reset": true, 
00:08:14.500 "compare": false, 00:08:14.500 "compare_and_write": false, 00:08:14.500 "abort": true, 00:08:14.500 "nvme_admin": false, 00:08:14.500 "nvme_io": false 00:08:14.500 }, 00:08:14.500 "memory_domains": [ 00:08:14.500 { 00:08:14.500 "dma_device_id": "system", 00:08:14.500 "dma_device_type": 1 00:08:14.500 }, 00:08:14.500 { 00:08:14.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.500 "dma_device_type": 2 00:08:14.500 } 00:08:14.500 ], 00:08:14.500 "driver_specific": {} 00:08:14.500 } 00:08:14.500 ]' 00:08:14.500 02:49:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:08:14.500 02:49:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:08:14.500 02:49:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:08:14.500 02:49:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:08:14.500 02:49:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:08:14.500 02:49:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:08:14.500 02:49:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:14.500 02:49:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:15.433 02:49:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:15.433 02:49:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:08:15.433 02:49:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:15.433 02:49:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:15.433 02:49:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:08:17.329 02:49:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:17.329 02:49:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:17.329 02:49:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:17.329 02:49:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:17.329 02:49:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:17.329 02:49:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:08:17.329 02:49:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:17.329 02:49:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:17.329 02:49:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:17.329 02:49:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:17.329 02:49:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:17.329 02:49:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:17.329 02:49:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:17.329 02:49:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:17.329 02:49:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:17.329 02:49:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:17.329 02:49:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:17.586 02:49:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:18.150 02:49:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:19.083 02:49:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:19.083 02:49:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:19.083 02:49:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:19.083 02:49:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:19.083 02:49:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:19.083 ************************************ 00:08:19.083 START TEST filesystem_in_capsule_ext4 00:08:19.083 ************************************ 00:08:19.083 02:49:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:19.083 02:49:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:19.083 02:49:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:19.083 02:49:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:19.083 02:49:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:08:19.083 02:49:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:19.083 02:49:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:08:19.083 02:49:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:08:19.084 02:49:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:08:19.084 02:49:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:08:19.084 02:49:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:19.084 mke2fs 1.46.5 (30-Dec-2021) 00:08:19.342 Discarding device blocks: 0/522240 done 00:08:19.342 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:19.342 Filesystem UUID: 40400e88-9e6c-4745-9ad3-a76f0771953f 00:08:19.342 Superblock backups stored on blocks: 00:08:19.342 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:19.342 00:08:19.342 Allocating group tables: 0/64 done 00:08:19.342 Writing inode tables: 0/64 done 00:08:20.275 Creating journal (8192 blocks): done 00:08:20.275 Writing superblocks and filesystem accounting information: 0/64 done 00:08:20.275 00:08:20.275 02:49:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:08:20.275 02:49:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:20.840 02:49:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:20.840 02:49:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:20.840 02:49:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:20.840 02:49:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:21.099 02:49:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:21.099 02:49:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:21.099 02:49:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 245385 00:08:21.099 02:49:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:21.099 02:49:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:21.099 02:49:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:21.099 02:49:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:21.099 00:08:21.099 real 0m1.875s 00:08:21.099 user 0m0.014s 00:08:21.099 sys 0m0.039s 00:08:21.099 02:49:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:21.099 02:49:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:21.099 ************************************ 00:08:21.099 END TEST filesystem_in_capsule_ext4 00:08:21.099 ************************************ 00:08:21.099 02:49:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:21.099 02:49:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:21.099 02:49:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:21.099 02:49:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:21.099 ************************************ 00:08:21.099 START TEST filesystem_in_capsule_btrfs 00:08:21.099 ************************************ 00:08:21.099 02:49:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:21.099 02:49:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:21.099 02:49:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:21.099 02:49:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:21.099 02:49:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:08:21.099 02:49:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:21.099 02:49:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:08:21.099 02:49:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:08:21.099 02:49:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:08:21.099 02:49:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:08:21.099 02:49:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:21.357 btrfs-progs v6.6.2 00:08:21.357 See https://btrfs.readthedocs.io for more information. 00:08:21.357 00:08:21.357 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:21.357 NOTE: several default settings have changed in version 5.15, please make sure 00:08:21.357 this does not affect your deployments: 00:08:21.357 - DUP for metadata (-m dup) 00:08:21.357 - enabled no-holes (-O no-holes) 00:08:21.357 - enabled free-space-tree (-R free-space-tree) 00:08:21.357 00:08:21.357 Label: (null) 00:08:21.357 UUID: b49f04c2-35a3-48c0-a340-e5df4e0189cc 00:08:21.357 Node size: 16384 00:08:21.357 Sector size: 4096 00:08:21.357 Filesystem size: 510.00MiB 00:08:21.357 Block group profiles: 00:08:21.357 Data: single 8.00MiB 00:08:21.357 Metadata: DUP 32.00MiB 00:08:21.357 System: DUP 8.00MiB 00:08:21.357 SSD detected: yes 00:08:21.357 Zoned device: no 00:08:21.357 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:21.357 Runtime features: free-space-tree 00:08:21.357 Checksum: crc32c 00:08:21.357 Number of devices: 1 00:08:21.357 Devices: 00:08:21.357 ID SIZE PATH 00:08:21.357 1 510.00MiB /dev/nvme0n1p1 00:08:21.357 00:08:21.357 02:49:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:08:21.357 02:49:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:22.290 02:49:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:22.290 02:49:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:22.290 02:49:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:22.290 02:49:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:22.290 02:49:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:22.290 02:49:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:22.290 02:49:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 245385 00:08:22.290 02:49:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:22.290 02:49:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:22.290 02:49:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:22.290 02:49:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:22.290 00:08:22.290 real 0m1.205s 00:08:22.290 user 0m0.010s 00:08:22.290 sys 0m0.052s 00:08:22.290 02:49:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:22.290 02:49:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:22.290 ************************************ 00:08:22.290 END TEST filesystem_in_capsule_btrfs 00:08:22.290 ************************************ 00:08:22.290 02:49:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:22.290 02:49:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:22.290 02:49:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:22.290 02:49:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:22.290 ************************************ 00:08:22.290 START TEST filesystem_in_capsule_xfs 00:08:22.290 ************************************ 00:08:22.291 02:49:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:08:22.291 02:49:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:22.291 02:49:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:22.291 02:49:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:22.291 02:49:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:08:22.291 02:49:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:22.291 02:49:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:08:22.291 02:49:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:08:22.291 02:49:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:08:22.291 02:49:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:08:22.291 02:49:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:22.291 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:22.291 = sectsz=512 attr=2, projid32bit=1 00:08:22.291 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:22.291 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:22.291 data = bsize=4096 blocks=130560, imaxpct=25 00:08:22.291 = sunit=0 swidth=0 blks 00:08:22.291 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:22.291 log =internal log bsize=4096 blocks=16384, version=2 00:08:22.291 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:22.291 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:23.666 Discarding blocks...Done. 
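With the mkfs.xfs geometry above, the last of the six filesystem bodies (ext4, btrfs and xfs in each pass) has been created. Every body follows the same nvmf_filesystem_create loop from target/filesystem.sh; condensed from the xtrace lines (a sketch, not the verbatim script), it is roughly:

  case $fstype in ext4) force=-F ;; *) force=-f ;; esac   # mirrors make_filesystem() in the trace
  mkfs.$fstype $force /dev/nvme0n1p1
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa && sync
  rm /mnt/device/aaa && sync
  umount /mnt/device
  kill -0 $nvmfpid                          # target (243827, then 245385) must still be alive
  lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still visible to the host
  lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still present

Each body is wrapped by run_test, which is where the START/END TEST banners and the real/user/sys timing lines in this log come from.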
00:08:23.666 02:49:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:08:23.666 02:49:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:25.602 02:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:25.859 02:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:25.859 02:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:25.859 02:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:25.859 02:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:25.859 02:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:25.859 02:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 245385 00:08:25.859 02:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:25.859 02:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:25.859 02:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:25.859 02:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:25.859 00:08:25.859 real 0m3.481s 00:08:25.859 user 0m0.014s 00:08:25.859 sys 0m0.035s 00:08:25.859 02:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:25.860 02:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:25.860 ************************************ 00:08:25.860 END TEST filesystem_in_capsule_xfs 00:08:25.860 ************************************ 00:08:25.860 02:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:26.119 02:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:26.119 02:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:26.119 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:26.119 02:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:26.119 02:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:08:26.119 02:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:26.119 02:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:26.119 02:49:16 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:26.119 02:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:26.119 02:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:08:26.119 02:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:26.119 02:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.119 02:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:26.119 02:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.119 02:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:26.119 02:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 245385 00:08:26.119 02:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 245385 ']' 00:08:26.119 02:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 245385 00:08:26.119 02:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:08:26.119 02:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:26.119 02:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 245385 00:08:26.119 02:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:26.119 02:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:26.119 02:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 245385' 00:08:26.119 killing process with pid 245385 00:08:26.119 02:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 245385 00:08:26.119 [2024-05-13 02:49:16.819899] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:26.119 02:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 245385 00:08:26.687 02:49:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:26.687 00:08:26.687 real 0m12.620s 00:08:26.687 user 0m48.400s 00:08:26.687 sys 0m1.755s 00:08:26.687 02:49:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:26.687 02:49:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:26.687 ************************************ 00:08:26.687 END TEST nvmf_filesystem_in_capsule 00:08:26.687 ************************************ 00:08:26.687 02:49:17 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:26.687 02:49:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:08:26.687 02:49:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:26.687 02:49:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:26.687 02:49:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:08:26.687 02:49:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:26.687 02:49:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:26.687 rmmod nvme_tcp 00:08:26.687 rmmod nvme_fabrics 00:08:26.687 rmmod nvme_keyring 00:08:26.687 02:49:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:26.687 02:49:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:26.687 02:49:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:26.687 02:49:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:26.687 02:49:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:26.687 02:49:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:26.687 02:49:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:26.687 02:49:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:26.687 02:49:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:26.687 02:49:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:26.687 02:49:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:26.687 02:49:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:28.594 02:49:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:28.594 00:08:28.594 real 0m28.129s 00:08:28.594 user 1m31.642s 00:08:28.594 sys 0m4.934s 00:08:28.594 02:49:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:28.594 02:49:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:28.594 ************************************ 00:08:28.594 END TEST nvmf_filesystem 00:08:28.594 ************************************ 00:08:28.594 02:49:19 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:28.594 02:49:19 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:28.594 02:49:19 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:28.594 02:49:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:28.852 ************************************ 00:08:28.852 START TEST nvmf_target_discovery 00:08:28.852 ************************************ 00:08:28.852 02:49:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:28.852 * Looking for test storage... 
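The nvmf_filesystem teardown above unloads the kernel NVMe/TCP modules and flushes the addresses set up for the test. A rough manual equivalent, with the interface and namespace names copied from the trace, is sketched below; the _remove_spdk_ns helper itself is not expanded in the log, so the namespace-related steps here are assumptions:

  # Approximate cleanup mirroring nvmftestfini / nvmf_tcp_fini in this log
  modprobe -v -r nvme-tcp       # the rmmod lines above are the result of this
  modprobe -v -r nvme-fabrics
  ip -4 addr flush cvl_0_1                                  # initiator-side interface
  ip netns exec cvl_0_0_ns_spdk ip -4 addr flush cvl_0_0    # target-side interface (assumed)
  ip netns del cvl_0_0_ns_spdk                              # assumed namespace removal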
00:08:28.852 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:28.852 02:49:19 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:28.852 02:49:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:28.852 02:49:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:28.852 02:49:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:28.852 02:49:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:28.852 02:49:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:28.852 02:49:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:28.852 02:49:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:28.852 02:49:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:28.852 02:49:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:28.852 02:49:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:28.852 02:49:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:28.852 02:49:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:28.852 02:49:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:28.852 02:49:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:28.852 02:49:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:28.852 02:49:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:28.852 02:49:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:28.852 02:49:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:28.852 02:49:19 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:28.852 02:49:19 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:28.852 02:49:19 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:28.853 02:49:19 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.853 02:49:19 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.853 02:49:19 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.853 02:49:19 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:28.853 02:49:19 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.853 02:49:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:28.853 02:49:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:28.853 02:49:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:28.853 02:49:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:28.853 02:49:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:28.853 02:49:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:28.853 02:49:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:28.853 02:49:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:28.853 02:49:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:28.853 02:49:19 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:28.853 02:49:19 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:28.853 02:49:19 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:28.853 02:49:19 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:28.853 02:49:19 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:28.853 02:49:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:28.853 02:49:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:28.853 02:49:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:08:28.853 02:49:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:28.853 02:49:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:28.853 02:49:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.853 02:49:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:28.853 02:49:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:28.853 02:49:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:28.853 02:49:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:28.853 02:49:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:28.853 02:49:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:30.758 02:49:21 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:30.758 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:30.758 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:30.758 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:30.758 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:08:30.758 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:31.018 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:31.018 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:31.018 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:31.018 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:31.018 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:08:31.018 00:08:31.018 --- 10.0.0.2 ping statistics --- 00:08:31.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.018 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:08:31.018 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:31.018 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:31.018 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:08:31.018 00:08:31.018 --- 10.0.0.1 ping statistics --- 00:08:31.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.018 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:08:31.018 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:31.018 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:31.018 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:31.018 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:31.018 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:31.018 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:31.018 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:31.018 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:31.018 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:31.018 02:49:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:31.018 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:31.018 02:49:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:31.018 02:49:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:31.018 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=248893 00:08:31.018 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:31.018 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 248893 00:08:31.018 02:49:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 248893 ']' 00:08:31.018 02:49:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.018 02:49:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:31.018 02:49:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:31.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.018 02:49:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:31.018 02:49:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:31.018 [2024-05-13 02:49:21.666837] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:08:31.018 [2024-05-13 02:49:21.666914] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.018 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.018 [2024-05-13 02:49:21.705472] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:31.018 [2024-05-13 02:49:21.731902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:31.018 [2024-05-13 02:49:21.817849] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:31.018 [2024-05-13 02:49:21.817902] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:31.018 [2024-05-13 02:49:21.817917] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:31.018 [2024-05-13 02:49:21.817929] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:31.018 [2024-05-13 02:49:21.817939] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:31.018 [2024-05-13 02:49:21.818016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.018 [2024-05-13 02:49:21.818087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.018 [2024-05-13 02:49:21.818133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:31.018 [2024-05-13 02:49:21.818136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.277 02:49:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:31.277 02:49:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:08:31.277 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:31.277 02:49:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:31.277 02:49:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:31.277 02:49:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:31.277 02:49:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:31.277 02:49:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.277 02:49:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:31.277 [2024-05-13 02:49:21.971652] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:31.277 02:49:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.277 02:49:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:31.277 02:49:21 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:31.277 02:49:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:31.277 02:49:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.277 02:49:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:31.277 Null1 00:08:31.277 02:49:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.277 02:49:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:31.277 02:49:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.277 02:49:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:31.277 02:49:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.277 02:49:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:31.277 02:49:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.277 02:49:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:31.277 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.277 02:49:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:31.277 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.277 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:31.277 [2024-05-13 02:49:22.011724] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:31.277 [2024-05-13 02:49:22.012022] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:31.277 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.277 02:49:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:31.277 02:49:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:31.277 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.277 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:31.277 Null2 00:08:31.277 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.277 02:49:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:31.277 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.277 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:31.277 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.277 02:49:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:31.277 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 
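The discovery-test setup runs the same three RPCs for each of four null bdevs; only the Null1 iteration is complete at this point in the trace. Written as direct rpc.py calls instead of the rpc_cmd wrapper, one iteration looks roughly like the following. The rpc.py invocation is an assumption; the RPC names, arguments, and the /var/tmp/spdk.sock socket are the ones visible in the log:

  # One iteration of the target/discovery.sh setup loop, as plain rpc.py calls
  rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  $rpc bdev_null_create Null1 102400 512                    # size/block size from the trace
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Null2 through Null4 repeat the same calls with their own serial numbers and subsystem NQNs, which is what the remaining entries of this loop show.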
00:08:31.277 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:31.277 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.277 02:49:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:31.277 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.277 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:31.277 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.277 02:49:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:31.277 02:49:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:31.277 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.277 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:31.277 Null3 00:08:31.277 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.277 02:49:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:31.277 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.277 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:31.277 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.277 02:49:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:31.277 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.277 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:31.277 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.277 02:49:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:31.277 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.277 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:31.535 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.535 02:49:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:31.535 02:49:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:31.535 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.535 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:31.535 Null4 00:08:31.535 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.535 02:49:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:31.535 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.535 02:49:22 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:08:31.535 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.535 02:49:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:31.535 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.535 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:31.535 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.535 02:49:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:31.535 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.535 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:31.535 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.535 02:49:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:31.535 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.535 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:31.535 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.535 02:49:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:31.535 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.535 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:31.535 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.535 02:49:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:08:31.535 00:08:31.535 Discovery Log Number of Records 6, Generation counter 6 00:08:31.535 =====Discovery Log Entry 0====== 00:08:31.535 trtype: tcp 00:08:31.535 adrfam: ipv4 00:08:31.535 subtype: current discovery subsystem 00:08:31.535 treq: not required 00:08:31.535 portid: 0 00:08:31.535 trsvcid: 4420 00:08:31.535 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:31.535 traddr: 10.0.0.2 00:08:31.535 eflags: explicit discovery connections, duplicate discovery information 00:08:31.535 sectype: none 00:08:31.535 =====Discovery Log Entry 1====== 00:08:31.535 trtype: tcp 00:08:31.535 adrfam: ipv4 00:08:31.535 subtype: nvme subsystem 00:08:31.535 treq: not required 00:08:31.535 portid: 0 00:08:31.535 trsvcid: 4420 00:08:31.535 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:31.535 traddr: 10.0.0.2 00:08:31.535 eflags: none 00:08:31.535 sectype: none 00:08:31.535 =====Discovery Log Entry 2====== 00:08:31.535 trtype: tcp 00:08:31.535 adrfam: ipv4 00:08:31.535 subtype: nvme subsystem 00:08:31.535 treq: not required 00:08:31.535 portid: 0 00:08:31.535 trsvcid: 4420 00:08:31.535 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:31.535 traddr: 10.0.0.2 00:08:31.535 eflags: none 00:08:31.535 sectype: none 00:08:31.535 =====Discovery Log Entry 3====== 00:08:31.535 trtype: tcp 00:08:31.535 adrfam: ipv4 
00:08:31.535 subtype: nvme subsystem 00:08:31.535 treq: not required 00:08:31.535 portid: 0 00:08:31.535 trsvcid: 4420 00:08:31.535 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:31.535 traddr: 10.0.0.2 00:08:31.535 eflags: none 00:08:31.535 sectype: none 00:08:31.535 =====Discovery Log Entry 4====== 00:08:31.535 trtype: tcp 00:08:31.535 adrfam: ipv4 00:08:31.535 subtype: nvme subsystem 00:08:31.535 treq: not required 00:08:31.535 portid: 0 00:08:31.535 trsvcid: 4420 00:08:31.535 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:31.535 traddr: 10.0.0.2 00:08:31.535 eflags: none 00:08:31.535 sectype: none 00:08:31.535 =====Discovery Log Entry 5====== 00:08:31.535 trtype: tcp 00:08:31.535 adrfam: ipv4 00:08:31.535 subtype: discovery subsystem referral 00:08:31.535 treq: not required 00:08:31.535 portid: 0 00:08:31.535 trsvcid: 4430 00:08:31.535 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:31.535 traddr: 10.0.0.2 00:08:31.535 eflags: none 00:08:31.535 sectype: none 00:08:31.535 02:49:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:31.535 Perform nvmf subsystem discovery via RPC 00:08:31.535 02:49:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:31.535 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.535 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:31.535 [ 00:08:31.535 { 00:08:31.535 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:31.535 "subtype": "Discovery", 00:08:31.535 "listen_addresses": [ 00:08:31.535 { 00:08:31.535 "trtype": "TCP", 00:08:31.535 "adrfam": "IPv4", 00:08:31.535 "traddr": "10.0.0.2", 00:08:31.535 "trsvcid": "4420" 00:08:31.535 } 00:08:31.535 ], 00:08:31.535 "allow_any_host": true, 00:08:31.535 "hosts": [] 00:08:31.535 }, 00:08:31.535 { 00:08:31.535 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:31.536 "subtype": "NVMe", 00:08:31.536 "listen_addresses": [ 00:08:31.536 { 00:08:31.536 "trtype": "TCP", 00:08:31.536 "adrfam": "IPv4", 00:08:31.536 "traddr": "10.0.0.2", 00:08:31.536 "trsvcid": "4420" 00:08:31.536 } 00:08:31.536 ], 00:08:31.536 "allow_any_host": true, 00:08:31.536 "hosts": [], 00:08:31.536 "serial_number": "SPDK00000000000001", 00:08:31.536 "model_number": "SPDK bdev Controller", 00:08:31.536 "max_namespaces": 32, 00:08:31.536 "min_cntlid": 1, 00:08:31.536 "max_cntlid": 65519, 00:08:31.536 "namespaces": [ 00:08:31.536 { 00:08:31.536 "nsid": 1, 00:08:31.536 "bdev_name": "Null1", 00:08:31.536 "name": "Null1", 00:08:31.536 "nguid": "B4103BBA0DA347719CA0A25583296DA3", 00:08:31.536 "uuid": "b4103bba-0da3-4771-9ca0-a25583296da3" 00:08:31.536 } 00:08:31.536 ] 00:08:31.536 }, 00:08:31.536 { 00:08:31.536 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:31.536 "subtype": "NVMe", 00:08:31.536 "listen_addresses": [ 00:08:31.536 { 00:08:31.536 "trtype": "TCP", 00:08:31.536 "adrfam": "IPv4", 00:08:31.536 "traddr": "10.0.0.2", 00:08:31.536 "trsvcid": "4420" 00:08:31.536 } 00:08:31.536 ], 00:08:31.536 "allow_any_host": true, 00:08:31.536 "hosts": [], 00:08:31.536 "serial_number": "SPDK00000000000002", 00:08:31.536 "model_number": "SPDK bdev Controller", 00:08:31.536 "max_namespaces": 32, 00:08:31.536 "min_cntlid": 1, 00:08:31.536 "max_cntlid": 65519, 00:08:31.536 "namespaces": [ 00:08:31.536 { 00:08:31.536 "nsid": 1, 00:08:31.536 "bdev_name": "Null2", 00:08:31.536 "name": "Null2", 00:08:31.536 "nguid": "B36AAC9C5CF04951B17AE1FBA6707CE4", 00:08:31.536 "uuid": 
"b36aac9c-5cf0-4951-b17a-e1fba6707ce4" 00:08:31.536 } 00:08:31.536 ] 00:08:31.536 }, 00:08:31.536 { 00:08:31.536 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:31.536 "subtype": "NVMe", 00:08:31.536 "listen_addresses": [ 00:08:31.536 { 00:08:31.536 "trtype": "TCP", 00:08:31.536 "adrfam": "IPv4", 00:08:31.536 "traddr": "10.0.0.2", 00:08:31.536 "trsvcid": "4420" 00:08:31.536 } 00:08:31.536 ], 00:08:31.536 "allow_any_host": true, 00:08:31.536 "hosts": [], 00:08:31.536 "serial_number": "SPDK00000000000003", 00:08:31.536 "model_number": "SPDK bdev Controller", 00:08:31.536 "max_namespaces": 32, 00:08:31.536 "min_cntlid": 1, 00:08:31.536 "max_cntlid": 65519, 00:08:31.536 "namespaces": [ 00:08:31.536 { 00:08:31.536 "nsid": 1, 00:08:31.536 "bdev_name": "Null3", 00:08:31.536 "name": "Null3", 00:08:31.536 "nguid": "6BD78B8585494829B5B428346C9A60B6", 00:08:31.536 "uuid": "6bd78b85-8549-4829-b5b4-28346c9a60b6" 00:08:31.536 } 00:08:31.536 ] 00:08:31.536 }, 00:08:31.536 { 00:08:31.536 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:31.536 "subtype": "NVMe", 00:08:31.536 "listen_addresses": [ 00:08:31.536 { 00:08:31.536 "trtype": "TCP", 00:08:31.536 "adrfam": "IPv4", 00:08:31.536 "traddr": "10.0.0.2", 00:08:31.536 "trsvcid": "4420" 00:08:31.536 } 00:08:31.536 ], 00:08:31.536 "allow_any_host": true, 00:08:31.536 "hosts": [], 00:08:31.536 "serial_number": "SPDK00000000000004", 00:08:31.536 "model_number": "SPDK bdev Controller", 00:08:31.536 "max_namespaces": 32, 00:08:31.536 "min_cntlid": 1, 00:08:31.536 "max_cntlid": 65519, 00:08:31.536 "namespaces": [ 00:08:31.536 { 00:08:31.536 "nsid": 1, 00:08:31.536 "bdev_name": "Null4", 00:08:31.536 "name": "Null4", 00:08:31.536 "nguid": "EE996178C05841A2A569EA962D516493", 00:08:31.536 "uuid": "ee996178-c058-41a2-a569-ea962d516493" 00:08:31.536 } 00:08:31.536 ] 00:08:31.536 } 00:08:31.536 ] 00:08:31.536 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.536 02:49:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:31.536 02:49:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:31.536 02:49:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:31.536 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.536 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:31.794 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.794 02:49:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:31.794 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.794 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:31.794 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.794 02:49:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:31.794 02:49:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:31.794 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.794 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:31.794 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:08:31.794 02:49:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:31.794 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.794 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:31.794 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.794 02:49:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:31.794 02:49:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:31.794 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.794 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:31.794 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.794 02:49:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:31.794 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.794 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:31.794 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.794 02:49:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:31.794 02:49:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:31.794 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.794 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:31.794 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.794 02:49:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:31.794 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.794 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:31.794 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.794 02:49:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:31.794 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.794 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:31.794 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.794 02:49:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:31.794 02:49:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:31.794 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.794 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:31.794 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.794 02:49:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:31.794 02:49:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:31.794 
02:49:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:31.794 02:49:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:31.794 02:49:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:31.794 02:49:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:31.794 02:49:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:31.794 02:49:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:31.795 02:49:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:31.795 02:49:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:31.795 rmmod nvme_tcp 00:08:31.795 rmmod nvme_fabrics 00:08:31.795 rmmod nvme_keyring 00:08:31.795 02:49:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:31.795 02:49:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:31.795 02:49:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:31.795 02:49:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 248893 ']' 00:08:31.795 02:49:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 248893 00:08:31.795 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 248893 ']' 00:08:31.795 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 248893 00:08:31.795 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:08:31.795 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:31.795 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 248893 00:08:31.795 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:31.795 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:31.795 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 248893' 00:08:31.795 killing process with pid 248893 00:08:31.795 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 248893 00:08:31.795 [2024-05-13 02:49:22.523168] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:31.795 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 248893 00:08:32.053 02:49:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:32.053 02:49:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:32.053 02:49:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:32.053 02:49:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:32.053 02:49:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:32.053 02:49:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.053 02:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:32.053 02:49:22 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.589 02:49:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:34.589 00:08:34.589 real 0m5.401s 00:08:34.589 user 0m4.516s 00:08:34.589 sys 0m1.772s 00:08:34.589 02:49:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:34.589 02:49:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:34.589 ************************************ 00:08:34.589 END TEST nvmf_target_discovery 00:08:34.589 ************************************ 00:08:34.589 02:49:24 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:34.589 02:49:24 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:34.589 02:49:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:34.589 02:49:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:34.589 ************************************ 00:08:34.589 START TEST nvmf_referrals 00:08:34.589 ************************************ 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:34.589 * Looking for test storage... 00:08:34.589 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:34.589 02:49:24 
nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
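The trace above is nvmftestfini tearing down the previous nvmf_target_discovery run before referrals.sh re-sources nvmf/common.sh for the next test. Distilled into a minimal sketch, assuming the namespace and interface names from this run (and assuming _remove_spdk_ns boils down to deleting that namespace; the real helper adds retries and pid-name checks), the cleanup visible in the trace amounts to:

    # hedged sketch of the teardown seen above, not the verbatim nvmftestfini
    sync
    modprobe -v -r nvme-tcp          # these two calls produce the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                  # 248893 in this run; killprocess in common.sh retries and verifies the process name first
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumed effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1         # drop the initiator-side address, as in nvmf/common.sh@279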
00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:34.589 02:49:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.494 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:36.494 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:36.494 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:36.494 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:36.494 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:36.494 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:36.494 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:36.494 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:36.494 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:36.494 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:36.494 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:36.494 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:36.494 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:36.494 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:36.494 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:36.494 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:36.494 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:36.494 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:36.495 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:36.495 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:36.495 02:49:26 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:36.495 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:36.495 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add 
cvl_0_0_ns_spdk 00:08:36.495 02:49:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:36.495 02:49:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:36.495 02:49:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:36.495 02:49:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:36.495 02:49:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:36.495 02:49:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:36.495 02:49:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:36.495 02:49:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:36.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:36.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:08:36.495 00:08:36.495 --- 10.0.0.2 ping statistics --- 00:08:36.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.495 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:08:36.495 02:49:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:36.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:36.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:08:36.495 00:08:36.495 --- 10.0.0.1 ping statistics --- 00:08:36.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.495 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:08:36.495 02:49:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:36.495 02:49:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:36.495 02:49:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:36.495 02:49:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:36.495 02:49:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:36.495 02:49:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:36.495 02:49:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:36.495 02:49:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:36.495 02:49:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:36.495 02:49:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:36.495 02:49:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:36.495 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:36.495 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.495 02:49:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=250976 00:08:36.495 02:49:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:36.495 02:49:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 250976 00:08:36.495 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 250976 ']' 00:08:36.495 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@831 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:08:36.495 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:36.495 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.495 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:36.495 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.495 [2024-05-13 02:49:27.158339] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:08:36.495 [2024-05-13 02:49:27.158446] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:36.495 EAL: No free 2048 kB hugepages reported on node 1 00:08:36.495 [2024-05-13 02:49:27.197076] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:36.495 [2024-05-13 02:49:27.229251] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:36.754 [2024-05-13 02:49:27.323943] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:36.754 [2024-05-13 02:49:27.324018] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:36.754 [2024-05-13 02:49:27.324048] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:36.754 [2024-05-13 02:49:27.324060] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:36.754 [2024-05-13 02:49:27.324070] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
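Up to this point the referrals test has built the phy-mode TCP topology (the target port cvl_0_0 moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, the initiator side left in the root namespace with 10.0.0.1/24 on cvl_0_1, port 4420 opened in iptables, both directions ping-checked) and started nvmf_tgt inside the namespace. The RPC sequence that follows creates the TCP transport, adds the discovery listener on port 8009, and exercises nvmf_discovery_add_referral / nvmf_discovery_get_referrals / nvmf_discovery_remove_referral. A minimal sketch of that flow, distilled from the commands visible in the trace (the explicit ./build/bin and ./scripts/rpc.py paths and the rpc_get_methods readiness loop are assumptions; in the harness they are wrapped by nvmfappstart, rpc_cmd and waitforlisten):

    # topology bring-up, as performed by nvmftestinit in the trace above
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # target start and referral RPCs, as issued below via rpc_cmd
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done   # stand-in for waitforlisten
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    # the test then reads the same three addresses back from both sides and compares the sorted lists
    ./scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
    # ...and finally removes each entry again with nvmf_discovery_remove_referral -t tcp -a <ip> -s 4430

The later passes of the test repeat the same pattern with an explicit subsystem NQN (-n nqn.2016-06.io.spdk:cnode1) and with the discovery NQN, checking via the two jq filters that the referral shows up under the expected subtype before being removed.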
00:08:36.754 [2024-05-13 02:49:27.324159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.754 [2024-05-13 02:49:27.324253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:36.754 [2024-05-13 02:49:27.324270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:36.754 [2024-05-13 02:49:27.324273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.754 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:36.754 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:08:36.754 02:49:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:36.754 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:36.754 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.754 02:49:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:36.754 02:49:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:36.754 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.754 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.754 [2024-05-13 02:49:27.479544] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:36.754 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.754 02:49:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:36.754 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.754 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.754 [2024-05-13 02:49:27.491511] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:36.754 [2024-05-13 02:49:27.491855] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:36.754 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.754 02:49:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:36.754 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.754 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.754 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.754 02:49:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:36.754 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.754 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.754 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.754 02:49:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:36.754 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.754 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set 
+x 00:08:36.754 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.754 02:49:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:36.754 02:49:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:36.754 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.754 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.754 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.012 02:49:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:37.012 02:49:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:37.012 02:49:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:37.012 02:49:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:37.012 02:49:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:37.012 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.012 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:37.012 02:49:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:37.012 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.012 02:49:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:37.012 02:49:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:37.012 02:49:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:37.012 02:49:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:37.012 02:49:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:37.012 02:49:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:37.012 02:49:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:37.012 02:49:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:37.012 02:49:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:37.012 02:49:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:37.012 02:49:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:37.012 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.012 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:37.012 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.012 02:49:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:37.012 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.012 02:49:27 nvmf_tcp.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:08:37.012 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.012 02:49:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:37.012 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.012 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:37.270 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.270 02:49:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:37.270 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.270 02:49:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:37.270 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:37.270 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.270 02:49:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:37.271 02:49:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:37.271 02:49:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:37.271 02:49:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:37.271 02:49:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:37.271 02:49:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:37.271 02:49:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:37.271 02:49:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:37.271 02:49:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:37.271 02:49:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:37.271 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.271 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:37.271 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.271 02:49:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:37.271 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.271 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:37.271 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.271 02:49:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:37.271 02:49:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:37.271 02:49:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:37.271 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.271 02:49:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:37.271 02:49:27 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:37.271 02:49:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:37.271 02:49:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.271 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:37.271 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:37.271 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:37.271 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:37.271 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:37.271 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:37.271 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:37.271 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:37.528 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:37.528 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:37.528 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:37.528 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:37.529 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:37.529 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:37.529 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:37.529 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:37.529 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:37.529 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:37.529 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:37.529 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:37.529 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:37.529 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:37.529 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:37.529 02:49:28 nvmf_tcp.nvmf_referrals -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.529 02:49:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:37.529 02:49:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.529 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:37.529 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:37.529 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:37.529 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:37.529 02:49:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.529 02:49:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:37.529 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:37.529 02:49:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.786 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:37.786 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:37.786 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:37.786 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:37.786 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:37.786 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:37.787 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:37.787 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:37.787 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:37.787 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:37.787 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:37.787 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:37.787 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:37.787 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:37.787 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:37.787 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:37.787 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:37.787 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:37.787 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:37.787 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:37.787 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:38.045 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:38.045 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:38.045 02:49:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.045 02:49:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:38.045 02:49:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.045 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:38.045 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:38.045 02:49:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.045 02:49:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:38.045 02:49:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.045 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:38.045 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:38.045 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:38.045 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:38.045 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:38.045 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:38.045 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:38.045 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:38.045 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:38.045 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:38.045 02:49:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:38.045 02:49:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:38.045 02:49:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:38.045 02:49:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:38.045 02:49:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:38.045 02:49:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:38.045 02:49:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:38.045 rmmod nvme_tcp 00:08:38.045 rmmod nvme_fabrics 00:08:38.045 rmmod nvme_keyring 00:08:38.045 02:49:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:38.045 02:49:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:38.045 02:49:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:38.045 02:49:28 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@489 -- # '[' -n 250976 ']' 00:08:38.045 02:49:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 250976 00:08:38.045 02:49:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 250976 ']' 00:08:38.045 02:49:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 250976 00:08:38.045 02:49:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:08:38.045 02:49:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:38.045 02:49:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 250976 00:08:38.304 02:49:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:38.304 02:49:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:38.304 02:49:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 250976' 00:08:38.304 killing process with pid 250976 00:08:38.304 02:49:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 250976 00:08:38.304 [2024-05-13 02:49:28.860991] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:38.304 02:49:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 250976 00:08:38.304 02:49:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:38.304 02:49:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:38.304 02:49:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:38.304 02:49:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:38.304 02:49:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:38.304 02:49:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.304 02:49:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:38.304 02:49:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.869 02:49:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:40.869 00:08:40.869 real 0m6.265s 00:08:40.869 user 0m8.390s 00:08:40.869 sys 0m1.942s 00:08:40.869 02:49:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:40.869 02:49:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:40.869 ************************************ 00:08:40.869 END TEST nvmf_referrals 00:08:40.869 ************************************ 00:08:40.869 02:49:31 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:40.869 02:49:31 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:40.869 02:49:31 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:40.869 02:49:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:40.869 ************************************ 00:08:40.869 START TEST nvmf_connect_disconnect 00:08:40.869 ************************************ 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh 
--transport=tcp 00:08:40.869 * Looking for test storage... 00:08:40.869 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:40.869 02:49:31 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:40.869 02:49:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:42.774 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:42.774 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:42.774 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:42.774 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:42.774 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:42.774 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:42.774 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:42.774 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:42.774 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:42.774 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:42.774 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:42.774 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:42.774 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:42.774 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:42.775 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:42.775 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 
-- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:42.775 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:42.775 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:42.775 
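The trace above is SPDK's NIC discovery step: it matches PCI functions against a list of supported vendor/device IDs (here two Intel E810 ports, 0x8086:0x159b) and resolves each one to its kernel net device through sysfs, printing the "Found net devices under ..." lines. A rough stand-alone sketch of that lookup, assuming the same sysfs layout; the ID list and the lspci-based lookup are illustrative stand-ins for the helper's internal pci_bus_cache map:

#!/usr/bin/env bash
# Resolve NVMe-oF-capable NICs to their net devices, keyed by PCI vendor:device ID.
# Sketch only: the supported_ids list below is an example, not the full table.
supported_ids=("8086:159b" "8086:1592" "15b3:1017")

for id in "${supported_ids[@]}"; do
    # "lspci -Dn -d <id>" prints e.g. "0000:0a:00.0 0200: 8086:159b"
    while read -r pci _; do
        for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e $netdir ]] || continue
            echo "Found net devices under $pci: ${netdir##*/}"
        done
    done < <(lspci -Dn -d "$id")
done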
02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:42.775 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:42.775 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:08:42.775 00:08:42.775 --- 10.0.0.2 ping statistics --- 00:08:42.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.775 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:42.775 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:42.775 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:08:42.775 00:08:42.775 --- 10.0.0.1 ping statistics --- 00:08:42.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.775 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=253262 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 253262 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 253262 ']' 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:42.775 02:49:33 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:42.775 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:42.775 [2024-05-13 02:49:33.311317] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:08:42.775 [2024-05-13 02:49:33.311407] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:42.775 EAL: No free 2048 kB hugepages reported on node 1 00:08:42.775 [2024-05-13 02:49:33.350344] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:42.775 [2024-05-13 02:49:33.376581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:42.776 [2024-05-13 02:49:33.466909] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:42.776 [2024-05-13 02:49:33.466967] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:42.776 [2024-05-13 02:49:33.466996] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:42.776 [2024-05-13 02:49:33.467007] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:42.776 [2024-05-13 02:49:33.467017] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
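nvmf_tcp_init, traced a few lines up, builds the topology this test runs on: one E810 port (cvl_0_0) is moved into a fresh network namespace and becomes the target side at 10.0.0.2, while its sibling port (cvl_0_1) stays in the root namespace as the initiator side at 10.0.0.1. A condensed sketch of those steps, reusing the interface names and addresses from this run:

#!/usr/bin/env bash
set -e
TGT_IF=cvl_0_0          # isolated in a netns, carries the target listener
INI_IF=cvl_0_1          # stays in the root namespace, used by the initiator
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic (port 4420) in on the initiator-side interface
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

# Sanity-check connectivity in both directions before starting the target
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

nvmf_tgt is then started inside that namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF, as the trace shows), so its 10.0.0.2:4420 listener is reachable from the initiator side in the root namespace.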
00:08:42.776 [2024-05-13 02:49:33.467083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:42.776 [2024-05-13 02:49:33.467149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:42.776 [2024-05-13 02:49:33.467214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:42.776 [2024-05-13 02:49:33.467217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.040 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:43.040 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:08:43.040 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:43.040 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:43.040 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:43.040 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:43.040 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:43.040 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.040 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:43.040 [2024-05-13 02:49:33.620580] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:43.040 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.040 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:43.040 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.040 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:43.040 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.040 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:43.040 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:43.040 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.040 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:43.040 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.040 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:43.040 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.040 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:43.040 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.040 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:43.040 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.040 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@10 -- # set +x 00:08:43.040 [2024-05-13 02:49:33.671136] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:43.040 [2024-05-13 02:49:33.671406] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:43.040 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.040 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:43.040 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:43.040 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:43.040 02:49:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:45.568 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.466 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.994 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.520 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.424 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.951 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:58.850 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.421 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.320 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.845 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.744 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.271 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.168 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.700 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.595 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.119 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.046 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.572 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.468 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.994 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.522 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.419 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.942 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.837 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.361 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.893 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.792 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.319 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.845 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.741 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.265 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.162 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.686 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.582 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.150 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.686 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.596 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.500 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.031 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.936 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.472 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.007 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.911 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.465 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.001 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.907 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.446 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.986 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.891 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.422 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.327 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.868 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.803 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.340 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.245 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.783 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.693 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.243 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.150 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.690 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.223 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.151 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.681 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.582 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.112 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.640 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.537 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.064 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.962 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.518 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.056 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.963 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.500 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.407 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.949 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.397 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.306 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.878 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.413 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.324 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.863 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.395 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.301 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:11:55.842 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.749 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.283 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.756 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.665 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.200 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.107 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.652 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.555 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.093 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.629 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.547 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.078 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.984 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.517 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.517 02:53:20 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:29.517 02:53:20 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:29.517 02:53:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:29.517 02:53:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:29.517 02:53:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:29.517 02:53:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:29.517 02:53:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:29.517 02:53:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:29.517 rmmod nvme_tcp 00:12:29.517 rmmod nvme_fabrics 00:12:29.517 rmmod nvme_keyring 00:12:29.517 02:53:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:29.517 02:53:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:29.517 02:53:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:29.517 02:53:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 253262 ']' 00:12:29.517 02:53:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 253262 00:12:29.517 02:53:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 253262 ']' 00:12:29.517 02:53:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 253262 00:12:29.517 02:53:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:12:29.517 02:53:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:29.517 02:53:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 253262 00:12:29.517 02:53:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:29.517 02:53:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:29.517 02:53:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 253262' 00:12:29.517 killing process with pid 253262 00:12:29.517 
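The long run of "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" messages above is the body of the connect/disconnect loop; its per-iteration commands are hidden by the test's set +x. Condensed, the test first configures the target over JSON-RPC with the arguments shown in the trace, then connects and disconnects the host 100 times with 8 I/O queues per connect. A sketch under those assumptions (rpc.py stands in for the test's rpc_cmd wrapper, and the sleep is a crude stand-in for the real wait-for-namespace logic):

#!/usr/bin/env bash
set -e
NQN=nqn.2016-06.io.spdk:cnode1
RPC=./scripts/rpc.py        # assumed path; the test drives this through rpc_cmd

# One-time target configuration, same arguments as in the trace
$RPC nvmf_create_transport -t tcp -o -u 8192 -c 0
$RPC bdev_malloc_create 64 512                        # creates Malloc0
$RPC nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns "$NQN" Malloc0
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

# 100 connect/disconnect iterations; each disconnect prints the
# "NQN:... disconnected 1 controller(s)" line seen above
for ((i = 0; i < 100; i++)); do
    nvme connect -i 8 -t tcp -n "$NQN" -a 10.0.0.2 -s 4420
    sleep 1                  # placeholder for waiting until the namespace shows up
    nvme disconnect -n "$NQN"
done

When the loop finishes, the EXIT trap runs nvmftestfini, which is what produces the surrounding rmmod nvme_tcp/nvme_fabrics/nvme_keyring output, the kill of the nvmf_tgt process, and the namespace cleanup (remove_spdk_ns, ip -4 addr flush) that follows.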
02:53:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 253262 00:12:29.517 [2024-05-13 02:53:20.169045] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:29.517 02:53:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 253262 00:12:29.776 02:53:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:29.776 02:53:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:29.776 02:53:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:29.776 02:53:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:29.776 02:53:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:29.776 02:53:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.776 02:53:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:29.776 02:53:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.682 02:53:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:31.682 00:12:31.682 real 3m51.282s 00:12:31.682 user 14m40.244s 00:12:31.682 sys 0m31.495s 00:12:31.682 02:53:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:31.682 02:53:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:31.682 ************************************ 00:12:31.682 END TEST nvmf_connect_disconnect 00:12:31.682 ************************************ 00:12:31.942 02:53:22 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:31.942 02:53:22 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:31.942 02:53:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:31.942 02:53:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:31.942 ************************************ 00:12:31.942 START TEST nvmf_multitarget 00:12:31.942 ************************************ 00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:31.942 * Looking for test storage... 
00:12:31.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:12:31.942 02:53:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:33.850 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:33.850 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:12:33.850 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:33.850 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:33.850 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:33.850 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:33.850 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:33.850 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:12:33.850 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:33.850 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:12:33.850 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:12:33.850 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:12:33.850 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:12:33.850 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:12:33.850 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:12:33.850 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:33.850 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:33.850 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:33.850 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:33.850 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:33.850 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:33.850 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:33.850 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:33.850 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:33.850 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:33.850 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:33.850 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:33.850 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:33.850 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:12:33.850 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:33.850 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:33.850 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:33.850 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:33.850 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:33.850 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:33.850 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:33.850 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:33.850 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:33.850 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:33.850 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:33.850 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:33.850 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:33.850 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:33.850 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:33.850 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:33.850 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:33.850 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:33.850 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:33.850 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:33.850 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:33.851 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:33.851 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:33.851 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.851 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:33.851 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:33.851 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:33.851 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:33.851 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.851 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:33.851 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:33.851 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.851 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:33.851 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.851 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:33.851 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:33.851 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:33.851 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:33.851 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.851 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:33.851 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:33.851 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.851 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:33.851 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:12:33.851 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:33.851 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:33.851 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:33.851 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:33.851 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:33.851 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:33.851 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:33.851 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:33.851 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:33.851 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:33.851 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:33.851 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:33.851 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:33.851 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:33.851 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:33.851 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:34.110 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:34.110 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:34.110 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:34.110 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:34.110 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:34.110 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:34.110 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:34.110 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:34.110 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:12:34.110 00:12:34.110 --- 10.0.0.2 ping statistics --- 00:12:34.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.110 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:12:34.110 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:34.110 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:34.110 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:12:34.110 00:12:34.110 --- 10.0.0.1 ping statistics --- 00:12:34.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.110 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:12:34.110 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:34.110 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:12:34.110 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:34.110 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:34.110 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:34.110 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:34.110 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:34.110 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:34.110 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:34.110 02:53:24 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:34.110 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:34.110 02:53:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:34.110 02:53:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:34.110 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=283776 00:12:34.110 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:34.110 02:53:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 283776 00:12:34.110 02:53:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 283776 ']' 00:12:34.110 02:53:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.110 02:53:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:34.110 02:53:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:34.110 02:53:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:34.110 02:53:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:34.110 [2024-05-13 02:53:24.813091] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 
00:12:34.110 [2024-05-13 02:53:24.813178] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:34.110 EAL: No free 2048 kB hugepages reported on node 1 00:12:34.110 [2024-05-13 02:53:24.853139] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:34.110 [2024-05-13 02:53:24.885081] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:34.368 [2024-05-13 02:53:24.980135] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:34.368 [2024-05-13 02:53:24.980199] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:34.368 [2024-05-13 02:53:24.980216] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:34.368 [2024-05-13 02:53:24.980229] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:34.368 [2024-05-13 02:53:24.980241] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:34.368 [2024-05-13 02:53:24.980308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:34.368 [2024-05-13 02:53:24.980364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:34.368 [2024-05-13 02:53:24.980415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:34.368 [2024-05-13 02:53:24.980417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.368 02:53:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:34.368 02:53:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:12:34.368 02:53:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:34.368 02:53:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:34.368 02:53:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:34.368 02:53:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:34.368 02:53:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:34.368 02:53:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:34.368 02:53:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:34.625 02:53:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:34.625 02:53:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:34.625 "nvmf_tgt_1" 00:12:34.625 02:53:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:34.883 "nvmf_tgt_2" 00:12:34.883 02:53:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:34.883 
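Around this point the trace shows the core of the multitarget test, driven through multitarget_rpc.py rather than plain rpc.py: it creates two additional targets, checks that nvmf_get_targets now reports three (the two new ones plus the default target), then deletes both and checks the count drops back to one. A condensed sketch of that flow, with the script path shortened and the -n/-s flags taken verbatim from the trace:

#!/usr/bin/env bash
set -e
RPC=./test/nvmf/target/multitarget_rpc.py    # path relative to the SPDK repo

count() { "$RPC" nvmf_get_targets | jq length; }

[[ $(count) -eq 1 ]]                          # only the default target exists

"$RPC" nvmf_create_target -n nvmf_tgt_1 -s 32
"$RPC" nvmf_create_target -n nvmf_tgt_2 -s 32
[[ $(count) -eq 3 ]]

"$RPC" nvmf_delete_target -n nvmf_tgt_1       # each delete prints "true" in the trace
"$RPC" nvmf_delete_target -n nvmf_tgt_2
[[ $(count) -eq 1 ]]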
02:53:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:34.883 02:53:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:34.883 02:53:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:35.141 true 00:12:35.141 02:53:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:35.141 true 00:12:35.141 02:53:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:35.141 02:53:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:35.401 02:53:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:35.401 02:53:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:35.401 02:53:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:35.401 02:53:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:35.401 02:53:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:35.401 02:53:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:35.401 02:53:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:35.401 02:53:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:35.401 02:53:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:35.401 rmmod nvme_tcp 00:12:35.401 rmmod nvme_fabrics 00:12:35.401 rmmod nvme_keyring 00:12:35.401 02:53:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:35.401 02:53:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:35.401 02:53:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:35.401 02:53:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 283776 ']' 00:12:35.401 02:53:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 283776 00:12:35.401 02:53:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 283776 ']' 00:12:35.401 02:53:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 283776 00:12:35.401 02:53:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:12:35.401 02:53:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:35.401 02:53:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 283776 00:12:35.401 02:53:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:35.402 02:53:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:35.402 02:53:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 283776' 00:12:35.402 killing process with pid 283776 00:12:35.402 02:53:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 283776 00:12:35.402 02:53:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 283776 00:12:35.662 02:53:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:35.662 02:53:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp 
== \t\c\p ]] 00:12:35.662 02:53:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:35.662 02:53:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:35.663 02:53:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:35.663 02:53:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.663 02:53:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:35.663 02:53:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.569 02:53:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:37.569 00:12:37.569 real 0m5.772s 00:12:37.569 user 0m6.463s 00:12:37.569 sys 0m1.994s 00:12:37.569 02:53:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:37.570 02:53:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:37.570 ************************************ 00:12:37.570 END TEST nvmf_multitarget 00:12:37.570 ************************************ 00:12:37.570 02:53:28 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:37.570 02:53:28 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:37.570 02:53:28 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:37.570 02:53:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:37.570 ************************************ 00:12:37.570 START TEST nvmf_rpc 00:12:37.570 ************************************ 00:12:37.570 02:53:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:37.829 * Looking for test storage... 
00:12:37.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:37.829 02:53:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:37.829 02:53:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:37.829 02:53:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:37.829 02:53:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:37.829 02:53:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:37.829 02:53:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:37.829 02:53:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:37.829 02:53:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:37.829 02:53:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:37.829 02:53:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:37.829 02:53:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:37.829 02:53:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:37.829 02:53:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:37.829 02:53:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:37.829 02:53:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:37.829 02:53:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:37.829 02:53:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:37.829 02:53:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:37.829 02:53:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:37.829 02:53:28 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:37.829 02:53:28 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:37.829 02:53:28 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:37.829 02:53:28 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.829 02:53:28 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.829 02:53:28 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.829 02:53:28 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:37.829 02:53:28 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.829 02:53:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:37.829 02:53:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:37.829 02:53:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:37.829 02:53:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:37.829 02:53:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:37.829 02:53:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:37.829 02:53:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:37.829 02:53:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:37.829 02:53:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:37.829 02:53:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:37.829 02:53:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:37.829 02:53:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:37.829 02:53:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:37.829 02:53:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:37.829 02:53:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:37.829 02:53:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:37.829 02:53:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.829 02:53:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:37.829 02:53:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.829 02:53:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:37.829 02:53:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:37.829 02:53:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:12:37.829 02:53:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
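The PCI scan that begins here walks /sys/bus/pci for NICs from the two vendors just declared (intel=0x8086, mellanox=0x15b3) and collects the kernel net interfaces behind them. A rough stand-alone equivalent, assuming lspci is available and reusing the E810 device ID 0x159b and the 0000:0a:00.0 function that the scan reports a few entries below:

  # sketch: find Intel E810 functions the way the scan does
  lspci -D -d 8086:159b
  # list the net devices bound to one of them (this is what fills pci_net_devs)
  ls /sys/bus/pci/devices/0000:0a:00.0/net/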
00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:39.770 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:39.770 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:39.770 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:39.770 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:39.770 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:39.771 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:39.771 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:39.771 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:39.771 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:39.771 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:39.771 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:39.771 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:39.771 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:39.771 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:39.771 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:39.771 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:39.771 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:39.771 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:39.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:39.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:12:39.771 00:12:39.771 --- 10.0.0.2 ping statistics --- 00:12:39.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.771 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:12:39.771 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:39.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:39.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:12:39.771 00:12:39.771 --- 10.0.0.1 ping statistics --- 00:12:39.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.771 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:12:39.771 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:39.771 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:12:39.771 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:39.771 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:39.771 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:39.771 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:39.771 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:39.771 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:39.771 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:39.771 02:53:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:39.771 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:39.771 02:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:39.771 02:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.771 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=285871 00:12:39.771 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:39.771 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 285871 00:12:39.771 02:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 285871 ']' 00:12:39.771 02:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.771 02:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:39.771 02:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:39.771 02:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:39.771 02:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.771 [2024-05-13 02:53:30.535286] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:12:39.771 [2024-05-13 02:53:30.535383] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:40.031 EAL: No free 2048 kB hugepages reported on node 1 00:12:40.031 [2024-05-13 02:53:30.577728] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:40.031 [2024-05-13 02:53:30.610611] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:40.031 [2024-05-13 02:53:30.706343] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:40.031 [2024-05-13 02:53:30.706405] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:40.031 [2024-05-13 02:53:30.706421] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:40.031 [2024-05-13 02:53:30.706435] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:40.031 [2024-05-13 02:53:30.706447] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:40.031 [2024-05-13 02:53:30.706516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:40.031 [2024-05-13 02:53:30.706572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:40.031 [2024-05-13 02:53:30.706637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:40.031 [2024-05-13 02:53:30.706640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.292 02:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:40.292 02:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:12:40.292 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:40.292 02:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:40.292 02:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.292 02:53:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:40.292 02:53:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:40.292 02:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.292 02:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.292 02:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.292 02:53:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:40.292 "tick_rate": 2700000000, 00:12:40.292 "poll_groups": [ 00:12:40.292 { 00:12:40.292 "name": "nvmf_tgt_poll_group_000", 00:12:40.292 "admin_qpairs": 0, 00:12:40.292 "io_qpairs": 0, 00:12:40.292 "current_admin_qpairs": 0, 00:12:40.292 "current_io_qpairs": 0, 00:12:40.292 "pending_bdev_io": 0, 00:12:40.292 "completed_nvme_io": 0, 00:12:40.292 "transports": [] 00:12:40.292 }, 00:12:40.292 { 00:12:40.292 "name": "nvmf_tgt_poll_group_001", 00:12:40.292 "admin_qpairs": 0, 00:12:40.292 "io_qpairs": 0, 00:12:40.292 "current_admin_qpairs": 0, 00:12:40.292 "current_io_qpairs": 0, 00:12:40.292 "pending_bdev_io": 0, 00:12:40.292 "completed_nvme_io": 0, 00:12:40.292 "transports": [] 00:12:40.292 }, 00:12:40.292 { 00:12:40.292 "name": "nvmf_tgt_poll_group_002", 00:12:40.292 "admin_qpairs": 0, 00:12:40.292 "io_qpairs": 0, 00:12:40.292 "current_admin_qpairs": 0, 00:12:40.292 "current_io_qpairs": 0, 00:12:40.292 "pending_bdev_io": 0, 00:12:40.292 "completed_nvme_io": 0, 00:12:40.292 "transports": [] 00:12:40.292 }, 00:12:40.292 { 00:12:40.292 "name": "nvmf_tgt_poll_group_003", 00:12:40.292 "admin_qpairs": 0, 00:12:40.292 "io_qpairs": 0, 00:12:40.292 "current_admin_qpairs": 0, 00:12:40.292 "current_io_qpairs": 0, 00:12:40.292 "pending_bdev_io": 0, 00:12:40.292 "completed_nvme_io": 0, 00:12:40.292 "transports": [] 00:12:40.292 } 00:12:40.292 ] 00:12:40.292 }' 00:12:40.292 02:53:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:40.292 02:53:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 
'filter=.poll_groups[].name' 00:12:40.292 02:53:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:40.292 02:53:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:40.292 02:53:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:40.292 02:53:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:40.292 02:53:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:40.292 02:53:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:40.292 02:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.292 02:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.292 [2024-05-13 02:53:30.966018] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:40.292 02:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.292 02:53:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:40.292 02:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.292 02:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.292 02:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.292 02:53:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:40.292 "tick_rate": 2700000000, 00:12:40.292 "poll_groups": [ 00:12:40.292 { 00:12:40.292 "name": "nvmf_tgt_poll_group_000", 00:12:40.292 "admin_qpairs": 0, 00:12:40.292 "io_qpairs": 0, 00:12:40.292 "current_admin_qpairs": 0, 00:12:40.292 "current_io_qpairs": 0, 00:12:40.292 "pending_bdev_io": 0, 00:12:40.292 "completed_nvme_io": 0, 00:12:40.292 "transports": [ 00:12:40.292 { 00:12:40.292 "trtype": "TCP" 00:12:40.292 } 00:12:40.292 ] 00:12:40.292 }, 00:12:40.292 { 00:12:40.292 "name": "nvmf_tgt_poll_group_001", 00:12:40.292 "admin_qpairs": 0, 00:12:40.292 "io_qpairs": 0, 00:12:40.292 "current_admin_qpairs": 0, 00:12:40.292 "current_io_qpairs": 0, 00:12:40.292 "pending_bdev_io": 0, 00:12:40.292 "completed_nvme_io": 0, 00:12:40.292 "transports": [ 00:12:40.292 { 00:12:40.292 "trtype": "TCP" 00:12:40.292 } 00:12:40.292 ] 00:12:40.292 }, 00:12:40.292 { 00:12:40.292 "name": "nvmf_tgt_poll_group_002", 00:12:40.292 "admin_qpairs": 0, 00:12:40.292 "io_qpairs": 0, 00:12:40.292 "current_admin_qpairs": 0, 00:12:40.292 "current_io_qpairs": 0, 00:12:40.293 "pending_bdev_io": 0, 00:12:40.293 "completed_nvme_io": 0, 00:12:40.293 "transports": [ 00:12:40.293 { 00:12:40.293 "trtype": "TCP" 00:12:40.293 } 00:12:40.293 ] 00:12:40.293 }, 00:12:40.293 { 00:12:40.293 "name": "nvmf_tgt_poll_group_003", 00:12:40.293 "admin_qpairs": 0, 00:12:40.293 "io_qpairs": 0, 00:12:40.293 "current_admin_qpairs": 0, 00:12:40.293 "current_io_qpairs": 0, 00:12:40.293 "pending_bdev_io": 0, 00:12:40.293 "completed_nvme_io": 0, 00:12:40.293 "transports": [ 00:12:40.293 { 00:12:40.293 "trtype": "TCP" 00:12:40.293 } 00:12:40.293 ] 00:12:40.293 } 00:12:40.293 ] 00:12:40.293 }' 00:12:40.293 02:53:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:40.293 02:53:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:40.293 02:53:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:40.293 02:53:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:40.293 02:53:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:40.293 02:53:31 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:40.293 02:53:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:40.293 02:53:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:40.293 02:53:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:40.293 02:53:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:40.293 02:53:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:40.293 02:53:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:40.293 02:53:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:40.293 02:53:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:40.293 02:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.293 02:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.293 Malloc1 00:12:40.293 02:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.293 02:53:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:40.293 02:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.293 02:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.552 02:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.552 02:53:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:40.552 02:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.552 02:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.552 02:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.552 02:53:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:40.552 02:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.552 02:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.552 02:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.552 02:53:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.552 02:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.552 02:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.552 [2024-05-13 02:53:31.119115] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:40.552 [2024-05-13 02:53:31.119424] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.552 02:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.552 02:53:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:40.552 02:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:40.552 02:53:31 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:40.552 02:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:40.552 02:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:40.552 02:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:40.552 02:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:40.552 02:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:40.552 02:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:40.552 02:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:40.552 02:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:40.552 02:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:40.552 [2024-05-13 02:53:31.141974] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:40.552 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:40.552 could not add new controller: failed to write to nvme-fabrics device 00:12:40.553 02:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:40.553 02:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:40.553 02:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:40.553 02:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:40.553 02:53:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:40.553 02:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.553 02:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.553 02:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.553 02:53:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:41.119 02:53:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:41.120 02:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:41.120 02:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:41.120 02:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:41.120 02:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:43.023 02:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:43.023 02:53:33 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:43.023 02:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:43.023 02:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:43.023 02:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:43.023 02:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:43.023 02:53:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:43.282 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.282 02:53:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:43.282 02:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:43.282 02:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:43.282 02:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:43.282 02:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:43.282 02:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:43.282 02:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:43.282 02:53:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:43.282 02:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.282 02:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.282 02:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.282 02:53:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:43.282 02:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:43.282 02:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:43.282 02:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:43.282 02:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:43.282 02:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:43.282 02:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:43.282 02:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:43.282 02:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:43.282 02:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:43.282 02:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:43.282 02:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:43.282 
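The connect attempt just above is the second NOT-wrapped one: the host NQN was removed with nvmf_subsystem_remove_host a few entries earlier, so the attempt is again expected to be refused, and the access-denied error plus the nvmf_subsystem_allow_any_host -e call that re-opens the subsystem follow below. A sketch of those two steps with SPDK's rpc.py; the script path and default RPC socket are assumptions here:

  # sketch: re-open cnode1 to any host, then the follow-up connect goes through
  sudo ./scripts/rpc.py nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
  sudo nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid=5b23e107-7094-e311-b1cb-001e67a97d55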
[2024-05-13 02:53:33.901339] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:43.282 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:43.282 could not add new controller: failed to write to nvme-fabrics device 00:12:43.282 02:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:43.282 02:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:43.282 02:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:43.282 02:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:43.282 02:53:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:43.282 02:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.282 02:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.282 02:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.282 02:53:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:43.849 02:53:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:43.849 02:53:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:43.849 02:53:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:43.849 02:53:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:43.849 02:53:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:45.756 02:53:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:45.756 02:53:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:45.756 02:53:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:45.756 02:53:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:45.756 02:53:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:45.756 02:53:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:45.756 02:53:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:46.016 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.016 02:53:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:46.016 02:53:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:46.016 02:53:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:46.016 02:53:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:46.016 02:53:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:46.016 02:53:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:46.016 02:53:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:46.016 02:53:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:46.016 02:53:36 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.016 02:53:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.016 02:53:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.016 02:53:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:46.016 02:53:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:46.016 02:53:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:46.016 02:53:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.016 02:53:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.016 02:53:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.016 02:53:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:46.016 02:53:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.016 02:53:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.016 [2024-05-13 02:53:36.659965] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:46.016 02:53:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.016 02:53:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:46.016 02:53:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.016 02:53:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.016 02:53:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.016 02:53:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:46.016 02:53:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.016 02:53:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.016 02:53:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.016 02:53:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:46.584 02:53:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:46.584 02:53:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:46.584 02:53:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:46.584 02:53:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:46.584 02:53:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:49.118 02:53:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:49.118 02:53:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:49.118 02:53:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:49.118 02:53:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:49.118 02:53:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:49.118 02:53:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:49.118 02:53:39 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:49.118 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.118 02:53:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:49.118 02:53:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:49.118 02:53:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:49.118 02:53:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:49.118 02:53:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:49.118 02:53:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:49.118 02:53:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:49.118 02:53:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:49.118 02:53:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.118 02:53:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.118 02:53:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.118 02:53:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:49.118 02:53:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.118 02:53:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.118 02:53:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.118 02:53:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:49.118 02:53:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:49.118 02:53:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.118 02:53:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.118 02:53:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.118 02:53:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:49.118 02:53:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.118 02:53:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.118 [2024-05-13 02:53:39.474948] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:49.118 02:53:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.118 02:53:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:49.118 02:53:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.118 02:53:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.118 02:53:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.118 02:53:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:49.118 02:53:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.118 02:53:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.118 02:53:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.118 02:53:39 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:49.378 02:53:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:49.378 02:53:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:49.378 02:53:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:49.378 02:53:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:49.378 02:53:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:51.284 02:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:51.284 02:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:51.284 02:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:51.284 02:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:51.284 02:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:51.284 02:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:51.284 02:53:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:51.543 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.543 02:53:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:51.543 02:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:51.543 02:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:51.543 02:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:51.543 02:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:51.543 02:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:51.543 02:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:51.543 02:53:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:51.543 02:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.543 02:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.543 02:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.543 02:53:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:51.544 02:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.544 02:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.544 02:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.544 02:53:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:51.544 02:53:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:51.544 02:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.544 02:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.544 02:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.544 02:53:42 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:51.544 02:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.544 02:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.544 [2024-05-13 02:53:42.212179] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.544 02:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.544 02:53:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:51.544 02:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.544 02:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.544 02:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.544 02:53:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:51.544 02:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.544 02:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.544 02:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.544 02:53:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:52.113 02:53:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:52.113 02:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:52.113 02:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:52.113 02:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:52.113 02:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:54.021 02:53:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:54.021 02:53:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:54.021 02:53:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:54.021 02:53:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:54.021 02:53:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:54.021 02:53:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:54.021 02:53:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:54.280 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.280 02:53:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:54.280 02:53:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:54.280 02:53:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:54.281 02:53:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:54.281 02:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:54.281 02:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:54.281 02:53:45 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1227 -- # return 0 00:12:54.281 02:53:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:54.281 02:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.281 02:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.281 02:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.281 02:53:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:54.281 02:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.281 02:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.281 02:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.281 02:53:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:54.281 02:53:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:54.281 02:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.281 02:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.281 02:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.281 02:53:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:54.281 02:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.281 02:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.281 [2024-05-13 02:53:45.047025] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:54.281 02:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.281 02:53:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:54.281 02:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.281 02:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.281 02:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.281 02:53:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:54.281 02:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.281 02:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.281 02:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.281 02:53:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:55.219 02:53:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:55.219 02:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:55.219 02:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:55.219 02:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:55.219 02:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:57.160 02:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:57.160 
02:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:57.160 02:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:57.160 02:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:57.160 02:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:57.160 02:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:57.160 02:53:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:57.160 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.160 02:53:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:57.161 02:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:57.161 02:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:57.161 02:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:57.161 02:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:57.161 02:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:57.161 02:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:57.161 02:53:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:57.161 02:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.161 02:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.161 02:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.161 02:53:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:57.161 02:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.161 02:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.161 02:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.161 02:53:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:57.161 02:53:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:57.161 02:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.161 02:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.161 02:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.161 02:53:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.161 02:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.161 02:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.161 [2024-05-13 02:53:47.830023] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.161 02:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.161 02:53:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:57.161 02:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.161 02:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set 
+x 00:12:57.161 02:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.161 02:53:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:57.161 02:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.161 02:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.161 02:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.161 02:53:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:57.728 02:53:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:57.728 02:53:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:57.728 02:53:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:57.728 02:53:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:57.728 02:53:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:59.634 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:59.634 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:59.634 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:59.634 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:59.634 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:59.634 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:59.634 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:59.893 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.893 02:53:50 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.893 [2024-05-13 02:53:50.600871] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.893 
[2024-05-13 02:53:50.648929] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.893 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.152 [2024-05-13 02:53:50.697137] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.152 [2024-05-13 02:53:50.745253] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.152 [2024-05-13 02:53:50.793425] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.152 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:00.153 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.153 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.153 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.153 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:00.153 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.153 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.153 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.153 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.153 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.153 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.153 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.153 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:00.153 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.153 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.153 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.153 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:00.153 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.153 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.153 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.153 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:00.153 "tick_rate": 2700000000, 00:13:00.153 "poll_groups": [ 00:13:00.153 { 00:13:00.153 "name": "nvmf_tgt_poll_group_000", 00:13:00.153 "admin_qpairs": 2, 00:13:00.153 "io_qpairs": 84, 00:13:00.153 "current_admin_qpairs": 0, 00:13:00.153 "current_io_qpairs": 0, 00:13:00.153 "pending_bdev_io": 0, 00:13:00.153 "completed_nvme_io": 134, 00:13:00.153 "transports": [ 00:13:00.153 { 00:13:00.153 "trtype": "TCP" 00:13:00.153 } 00:13:00.153 ] 
00:13:00.153 }, 00:13:00.153 { 00:13:00.153 "name": "nvmf_tgt_poll_group_001", 00:13:00.153 "admin_qpairs": 2, 00:13:00.153 "io_qpairs": 84, 00:13:00.153 "current_admin_qpairs": 0, 00:13:00.153 "current_io_qpairs": 0, 00:13:00.153 "pending_bdev_io": 0, 00:13:00.153 "completed_nvme_io": 232, 00:13:00.153 "transports": [ 00:13:00.153 { 00:13:00.153 "trtype": "TCP" 00:13:00.153 } 00:13:00.153 ] 00:13:00.153 }, 00:13:00.153 { 00:13:00.153 "name": "nvmf_tgt_poll_group_002", 00:13:00.153 "admin_qpairs": 1, 00:13:00.153 "io_qpairs": 84, 00:13:00.153 "current_admin_qpairs": 0, 00:13:00.153 "current_io_qpairs": 0, 00:13:00.153 "pending_bdev_io": 0, 00:13:00.153 "completed_nvme_io": 183, 00:13:00.153 "transports": [ 00:13:00.153 { 00:13:00.153 "trtype": "TCP" 00:13:00.153 } 00:13:00.153 ] 00:13:00.153 }, 00:13:00.153 { 00:13:00.153 "name": "nvmf_tgt_poll_group_003", 00:13:00.153 "admin_qpairs": 2, 00:13:00.153 "io_qpairs": 84, 00:13:00.153 "current_admin_qpairs": 0, 00:13:00.153 "current_io_qpairs": 0, 00:13:00.153 "pending_bdev_io": 0, 00:13:00.153 "completed_nvme_io": 137, 00:13:00.153 "transports": [ 00:13:00.153 { 00:13:00.153 "trtype": "TCP" 00:13:00.153 } 00:13:00.153 ] 00:13:00.153 } 00:13:00.153 ] 00:13:00.153 }' 00:13:00.153 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:00.153 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:00.153 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:00.153 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:00.153 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:00.153 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:00.153 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:00.153 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:00.153 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:00.153 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:13:00.153 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:00.153 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:00.153 02:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:00.153 02:53:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:00.153 02:53:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:13:00.153 02:53:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:00.153 02:53:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:13:00.153 02:53:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:00.153 02:53:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:00.153 rmmod nvme_tcp 00:13:00.153 rmmod nvme_fabrics 00:13:00.153 rmmod nvme_keyring 00:13:00.413 02:53:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:00.413 02:53:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:13:00.413 02:53:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:13:00.413 02:53:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 285871 ']' 00:13:00.413 02:53:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 285871 00:13:00.413 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 285871 ']' 00:13:00.413 02:53:50 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 285871 00:13:00.413 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # uname 00:13:00.413 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:00.413 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 285871 00:13:00.413 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:00.413 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:00.413 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 285871' 00:13:00.413 killing process with pid 285871 00:13:00.413 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 285871 00:13:00.413 [2024-05-13 02:53:50.997808] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:00.413 02:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 285871 00:13:00.671 02:53:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:00.671 02:53:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:00.671 02:53:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:00.671 02:53:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:00.671 02:53:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:00.671 02:53:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.671 02:53:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:00.671 02:53:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:02.581 02:53:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:02.581 00:13:02.581 real 0m24.961s 00:13:02.581 user 1m21.208s 00:13:02.581 sys 0m3.867s 00:13:02.581 02:53:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:02.581 02:53:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.581 ************************************ 00:13:02.581 END TEST nvmf_rpc 00:13:02.581 ************************************ 00:13:02.581 02:53:53 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:02.581 02:53:53 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:02.581 02:53:53 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:02.581 02:53:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:02.581 ************************************ 00:13:02.581 START TEST nvmf_invalid 00:13:02.581 ************************************ 00:13:02.581 02:53:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:02.841 * Looking for test storage... 
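The nvmf_rpc pass that finishes above drives the same RPC sequence on every loop iteration of target/rpc.sh. A condensed sketch of one iteration, assuming only the rpc.py calls, NQN, serial number and 10.0.0.2:4420 listener that appear in the trace (the hostnqn/hostid variables stand in for the UUID used in the log, and the wait loop is a simplification of waitforserial):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  # create the subsystem, expose it over TCP and back it with the Malloc1 bdev
  $rpc nvmf_create_subsystem $nqn -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_ns $nqn Malloc1 -n 5
  $rpc nvmf_subsystem_allow_any_host $nqn

  # connect from the initiator side and wait until lsblk reports the serial
  nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -n $nqn -a 10.0.0.2 -s 4420
  until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 2; done

  # tear down: disconnect, drop namespace 5, delete the subsystem
  nvme disconnect -n $nqn
  $rpc nvmf_subsystem_remove_ns $nqn 5
  $rpc nvmf_delete_subsystem $nqn

  # the final sanity check sums per-poll-group counters out of nvmf_get_stats
  $rpc nvmf_get_stats | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}'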
00:13:02.841 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:13:02.841 02:53:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:04.747 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:04.747 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:04.747 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:04.747 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:04.748 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:04.748 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:04.748 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:13:04.748 00:13:04.748 --- 10.0.0.2 ping statistics --- 00:13:04.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:04.748 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:04.748 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:04.748 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:13:04.748 00:13:04.748 --- 10.0.0.1 ping statistics --- 00:13:04.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:04.748 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=290359 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 290359 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 290359 ']' 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:04.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:04.748 02:53:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:04.748 [2024-05-13 02:53:55.507157] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 
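Before the invalid-argument tests can run, nvmftestinit moves one of the two detected e810 ports into a dedicated network namespace and assigns each side an address, which is what the ping checks above verify. A minimal sketch of that plumbing, using only the interface names and addresses shown in the trace (cvl_0_0 becomes the in-namespace target port, cvl_0_1 stays in the root namespace as the initiator port):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address

  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # allow NVMe/TCP traffic in on the initiator side, then check both directions answer
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt app that waitforlisten polls for is then launched inside the same namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF), so the 10.0.0.2:4420 listeners created later are reachable from the root namespace.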
00:13:04.748 [2024-05-13 02:53:55.507244] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:04.748 EAL: No free 2048 kB hugepages reported on node 1 00:13:04.748 [2024-05-13 02:53:55.547931] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:05.006 [2024-05-13 02:53:55.574368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:05.006 [2024-05-13 02:53:55.664511] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:05.006 [2024-05-13 02:53:55.664561] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:05.006 [2024-05-13 02:53:55.664590] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:05.006 [2024-05-13 02:53:55.664608] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:05.006 [2024-05-13 02:53:55.664618] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:05.006 [2024-05-13 02:53:55.664667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:05.006 [2024-05-13 02:53:55.664787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:05.006 [2024-05-13 02:53:55.664810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:05.006 [2024-05-13 02:53:55.664812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.006 02:53:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:05.006 02:53:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:13:05.006 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:05.006 02:53:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:05.006 02:53:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:05.264 02:53:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:05.264 02:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:05.264 02:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode31576 00:13:05.521 [2024-05-13 02:53:56.090294] nvmf_rpc.c: 391:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:05.521 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:05.521 { 00:13:05.521 "nqn": "nqn.2016-06.io.spdk:cnode31576", 00:13:05.521 "tgt_name": "foobar", 00:13:05.521 "method": "nvmf_create_subsystem", 00:13:05.521 "req_id": 1 00:13:05.521 } 00:13:05.521 Got JSON-RPC error response 00:13:05.521 response: 00:13:05.521 { 00:13:05.521 "code": -32603, 00:13:05.521 "message": "Unable to find target foobar" 00:13:05.521 }' 00:13:05.521 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:05.521 { 00:13:05.521 "nqn": "nqn.2016-06.io.spdk:cnode31576", 00:13:05.521 "tgt_name": "foobar", 00:13:05.521 "method": "nvmf_create_subsystem", 00:13:05.521 "req_id": 1 00:13:05.521 } 
00:13:05.521 Got JSON-RPC error response 00:13:05.521 response: 00:13:05.521 { 00:13:05.521 "code": -32603, 00:13:05.521 "message": "Unable to find target foobar" 00:13:05.521 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:05.521 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:05.521 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode15988 00:13:05.779 [2024-05-13 02:53:56.355206] nvmf_rpc.c: 408:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15988: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:05.779 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:05.779 { 00:13:05.779 "nqn": "nqn.2016-06.io.spdk:cnode15988", 00:13:05.779 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:05.779 "method": "nvmf_create_subsystem", 00:13:05.779 "req_id": 1 00:13:05.779 } 00:13:05.779 Got JSON-RPC error response 00:13:05.779 response: 00:13:05.779 { 00:13:05.779 "code": -32602, 00:13:05.779 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:05.780 }' 00:13:05.780 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:05.780 { 00:13:05.780 "nqn": "nqn.2016-06.io.spdk:cnode15988", 00:13:05.780 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:05.780 "method": "nvmf_create_subsystem", 00:13:05.780 "req_id": 1 00:13:05.780 } 00:13:05.780 Got JSON-RPC error response 00:13:05.780 response: 00:13:05.780 { 00:13:05.780 "code": -32602, 00:13:05.780 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:05.780 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:05.780 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:05.780 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode16885 00:13:06.038 [2024-05-13 02:53:56.616054] nvmf_rpc.c: 417:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16885: invalid model number 'SPDK_Controller' 00:13:06.038 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:06.038 { 00:13:06.038 "nqn": "nqn.2016-06.io.spdk:cnode16885", 00:13:06.038 "model_number": "SPDK_Controller\u001f", 00:13:06.038 "method": "nvmf_create_subsystem", 00:13:06.038 "req_id": 1 00:13:06.038 } 00:13:06.038 Got JSON-RPC error response 00:13:06.038 response: 00:13:06.038 { 00:13:06.038 "code": -32602, 00:13:06.038 "message": "Invalid MN SPDK_Controller\u001f" 00:13:06.038 }' 00:13:06.038 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:06.038 { 00:13:06.038 "nqn": "nqn.2016-06.io.spdk:cnode16885", 00:13:06.038 "model_number": "SPDK_Controller\u001f", 00:13:06.038 "method": "nvmf_create_subsystem", 00:13:06.038 "req_id": 1 00:13:06.038 } 00:13:06.038 Got JSON-RPC error response 00:13:06.038 response: 00:13:06.038 { 00:13:06.038 "code": -32602, 00:13:06.038 "message": "Invalid MN SPDK_Controller\u001f" 00:13:06.038 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:06.038 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:06.038 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' 
'51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:06.039 02:53:56 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:06.039 02:53:56 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ D == \- ]] 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Df)wYP~mr-RCVN>?pV%u' 00:13:06.039 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'Df)wYP~mr-RCVN>?pV%u' nqn.2016-06.io.spdk:cnode1281 00:13:06.299 [2024-05-13 02:53:56.933145] nvmf_rpc.c: 408:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1281: invalid serial number 'Df)wYP~mr-RCVN>?pV%u' 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:06.299 { 00:13:06.299 "nqn": "nqn.2016-06.io.spdk:cnode1281", 00:13:06.299 "serial_number": "Df)wYP~mr-RCVN\u007f>?pV%u", 
00:13:06.299 "method": "nvmf_create_subsystem", 00:13:06.299 "req_id": 1 00:13:06.299 } 00:13:06.299 Got JSON-RPC error response 00:13:06.299 response: 00:13:06.299 { 00:13:06.299 "code": -32602, 00:13:06.299 "message": "Invalid SN Df)wYP~mr-RCVN\u007f>?pV%u" 00:13:06.299 }' 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:06.299 { 00:13:06.299 "nqn": "nqn.2016-06.io.spdk:cnode1281", 00:13:06.299 "serial_number": "Df)wYP~mr-RCVN\u007f>?pV%u", 00:13:06.299 "method": "nvmf_create_subsystem", 00:13:06.299 "req_id": 1 00:13:06.299 } 00:13:06.299 Got JSON-RPC error response 00:13:06.299 response: 00:13:06.299 { 00:13:06.299 "code": -32602, 00:13:06.299 "message": "Invalid SN Df)wYP~mr-RCVN\u007f>?pV%u" 00:13:06.299 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:06.299 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:06.300 02:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 
00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 
00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.300 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.301 
02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ Q == \- ]] 00:13:06.301 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Q*muk(kE}AVh Tz\iq'\''3s6h60]uJ1XXx./%TFyJJk' 00:13:06.560 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'Q*muk(kE}AVh Tz\iq'\''3s6h60]uJ1XXx./%TFyJJk' nqn.2016-06.io.spdk:cnode2192 00:13:06.560 [2024-05-13 02:53:57.338428] nvmf_rpc.c: 417:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2192: invalid model number 'Q*muk(kE}AVh Tz\iq'3s6h60]uJ1XXx./%TFyJJk' 00:13:06.560 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:06.560 { 00:13:06.560 "nqn": "nqn.2016-06.io.spdk:cnode2192", 00:13:06.560 "model_number": "Q*muk(kE}AVh Tz\\iq'\''3s6h60]uJ1XXx./%TFyJJk", 00:13:06.560 "method": "nvmf_create_subsystem", 00:13:06.560 "req_id": 1 00:13:06.560 } 00:13:06.560 Got JSON-RPC error response 00:13:06.560 response: 00:13:06.560 { 00:13:06.560 "code": -32602, 00:13:06.560 "message": "Invalid MN Q*muk(kE}AVh Tz\\iq'\''3s6h60]uJ1XXx./%TFyJJk" 00:13:06.560 }' 00:13:06.560 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:06.560 { 00:13:06.560 "nqn": "nqn.2016-06.io.spdk:cnode2192", 00:13:06.560 "model_number": "Q*muk(kE}AVh Tz\\iq'3s6h60]uJ1XXx./%TFyJJk", 00:13:06.560 "method": "nvmf_create_subsystem", 00:13:06.560 "req_id": 1 00:13:06.560 } 00:13:06.560 Got JSON-RPC error response 00:13:06.561 response: 00:13:06.561 { 00:13:06.561 "code": -32602, 00:13:06.561 "message": "Invalid MN Q*muk(kE}AVh Tz\\iq'3s6h60]uJ1XXx./%TFyJJk" 00:13:06.561 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:06.561 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:06.819 [2024-05-13 02:53:57.587356] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:06.819 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:07.077 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:07.077 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:07.077 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:07.077 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:07.078 02:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:07.336 [2024-05-13 02:53:58.080922] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:07.336 [2024-05-13 02:53:58.081039] nvmf_rpc.c: 789:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:07.336 02:53:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:07.336 { 00:13:07.336 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:07.336 "listen_address": { 00:13:07.336 "trtype": "tcp", 00:13:07.336 "traddr": "", 00:13:07.336 "trsvcid": "4421" 00:13:07.336 }, 00:13:07.336 "method": "nvmf_subsystem_remove_listener", 00:13:07.336 "req_id": 
1 00:13:07.336 } 00:13:07.336 Got JSON-RPC error response 00:13:07.336 response: 00:13:07.336 { 00:13:07.336 "code": -32602, 00:13:07.336 "message": "Invalid parameters" 00:13:07.336 }' 00:13:07.336 02:53:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:07.336 { 00:13:07.336 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:07.336 "listen_address": { 00:13:07.336 "trtype": "tcp", 00:13:07.336 "traddr": "", 00:13:07.336 "trsvcid": "4421" 00:13:07.336 }, 00:13:07.336 "method": "nvmf_subsystem_remove_listener", 00:13:07.336 "req_id": 1 00:13:07.336 } 00:13:07.336 Got JSON-RPC error response 00:13:07.336 response: 00:13:07.336 { 00:13:07.336 "code": -32602, 00:13:07.336 "message": "Invalid parameters" 00:13:07.336 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:07.336 02:53:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22526 -i 0 00:13:07.594 [2024-05-13 02:53:58.325739] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22526: invalid cntlid range [0-65519] 00:13:07.594 02:53:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:07.594 { 00:13:07.594 "nqn": "nqn.2016-06.io.spdk:cnode22526", 00:13:07.594 "min_cntlid": 0, 00:13:07.594 "method": "nvmf_create_subsystem", 00:13:07.594 "req_id": 1 00:13:07.594 } 00:13:07.594 Got JSON-RPC error response 00:13:07.594 response: 00:13:07.594 { 00:13:07.594 "code": -32602, 00:13:07.594 "message": "Invalid cntlid range [0-65519]" 00:13:07.594 }' 00:13:07.594 02:53:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:07.594 { 00:13:07.594 "nqn": "nqn.2016-06.io.spdk:cnode22526", 00:13:07.594 "min_cntlid": 0, 00:13:07.594 "method": "nvmf_create_subsystem", 00:13:07.594 "req_id": 1 00:13:07.594 } 00:13:07.594 Got JSON-RPC error response 00:13:07.594 response: 00:13:07.594 { 00:13:07.594 "code": -32602, 00:13:07.594 "message": "Invalid cntlid range [0-65519]" 00:13:07.594 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:07.594 02:53:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27947 -i 65520 00:13:07.852 [2024-05-13 02:53:58.582568] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27947: invalid cntlid range [65520-65519] 00:13:07.852 02:53:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:07.852 { 00:13:07.852 "nqn": "nqn.2016-06.io.spdk:cnode27947", 00:13:07.852 "min_cntlid": 65520, 00:13:07.852 "method": "nvmf_create_subsystem", 00:13:07.852 "req_id": 1 00:13:07.852 } 00:13:07.852 Got JSON-RPC error response 00:13:07.852 response: 00:13:07.852 { 00:13:07.852 "code": -32602, 00:13:07.852 "message": "Invalid cntlid range [65520-65519]" 00:13:07.852 }' 00:13:07.852 02:53:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:07.852 { 00:13:07.852 "nqn": "nqn.2016-06.io.spdk:cnode27947", 00:13:07.852 "min_cntlid": 65520, 00:13:07.852 "method": "nvmf_create_subsystem", 00:13:07.852 "req_id": 1 00:13:07.852 } 00:13:07.853 Got JSON-RPC error response 00:13:07.853 response: 00:13:07.853 { 00:13:07.853 "code": -32602, 00:13:07.853 "message": "Invalid cntlid range [65520-65519]" 00:13:07.853 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:07.853 02:53:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26435 -I 0 00:13:08.110 [2024-05-13 02:53:58.847544] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26435: invalid cntlid range [1-0] 00:13:08.110 02:53:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:08.110 { 00:13:08.110 "nqn": "nqn.2016-06.io.spdk:cnode26435", 00:13:08.110 "max_cntlid": 0, 00:13:08.110 "method": "nvmf_create_subsystem", 00:13:08.110 "req_id": 1 00:13:08.110 } 00:13:08.110 Got JSON-RPC error response 00:13:08.110 response: 00:13:08.110 { 00:13:08.110 "code": -32602, 00:13:08.110 "message": "Invalid cntlid range [1-0]" 00:13:08.110 }' 00:13:08.110 02:53:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:08.110 { 00:13:08.110 "nqn": "nqn.2016-06.io.spdk:cnode26435", 00:13:08.110 "max_cntlid": 0, 00:13:08.110 "method": "nvmf_create_subsystem", 00:13:08.110 "req_id": 1 00:13:08.110 } 00:13:08.110 Got JSON-RPC error response 00:13:08.110 response: 00:13:08.110 { 00:13:08.110 "code": -32602, 00:13:08.110 "message": "Invalid cntlid range [1-0]" 00:13:08.110 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:08.110 02:53:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16312 -I 65520 00:13:08.368 [2024-05-13 02:53:59.100344] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16312: invalid cntlid range [1-65520] 00:13:08.368 02:53:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:08.368 { 00:13:08.368 "nqn": "nqn.2016-06.io.spdk:cnode16312", 00:13:08.368 "max_cntlid": 65520, 00:13:08.368 "method": "nvmf_create_subsystem", 00:13:08.369 "req_id": 1 00:13:08.369 } 00:13:08.369 Got JSON-RPC error response 00:13:08.369 response: 00:13:08.369 { 00:13:08.369 "code": -32602, 00:13:08.369 "message": "Invalid cntlid range [1-65520]" 00:13:08.369 }' 00:13:08.369 02:53:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:08.369 { 00:13:08.369 "nqn": "nqn.2016-06.io.spdk:cnode16312", 00:13:08.369 "max_cntlid": 65520, 00:13:08.369 "method": "nvmf_create_subsystem", 00:13:08.369 "req_id": 1 00:13:08.369 } 00:13:08.369 Got JSON-RPC error response 00:13:08.369 response: 00:13:08.369 { 00:13:08.369 "code": -32602, 00:13:08.369 "message": "Invalid cntlid range [1-65520]" 00:13:08.369 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:08.369 02:53:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8332 -i 6 -I 5 00:13:08.627 [2024-05-13 02:53:59.345160] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8332: invalid cntlid range [6-5] 00:13:08.627 02:53:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:08.627 { 00:13:08.627 "nqn": "nqn.2016-06.io.spdk:cnode8332", 00:13:08.627 "min_cntlid": 6, 00:13:08.627 "max_cntlid": 5, 00:13:08.627 "method": "nvmf_create_subsystem", 00:13:08.627 "req_id": 1 00:13:08.627 } 00:13:08.627 Got JSON-RPC error response 00:13:08.627 response: 00:13:08.627 { 00:13:08.627 "code": -32602, 00:13:08.627 "message": "Invalid cntlid range [6-5]" 00:13:08.627 }' 00:13:08.627 02:53:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:08.627 { 00:13:08.627 "nqn": 
"nqn.2016-06.io.spdk:cnode8332", 00:13:08.627 "min_cntlid": 6, 00:13:08.627 "max_cntlid": 5, 00:13:08.627 "method": "nvmf_create_subsystem", 00:13:08.627 "req_id": 1 00:13:08.627 } 00:13:08.627 Got JSON-RPC error response 00:13:08.627 response: 00:13:08.627 { 00:13:08.627 "code": -32602, 00:13:08.627 "message": "Invalid cntlid range [6-5]" 00:13:08.627 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:08.627 02:53:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:08.887 02:53:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:08.887 { 00:13:08.887 "name": "foobar", 00:13:08.887 "method": "nvmf_delete_target", 00:13:08.887 "req_id": 1 00:13:08.887 } 00:13:08.887 Got JSON-RPC error response 00:13:08.887 response: 00:13:08.887 { 00:13:08.887 "code": -32602, 00:13:08.887 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:08.887 }' 00:13:08.887 02:53:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:08.887 { 00:13:08.887 "name": "foobar", 00:13:08.887 "method": "nvmf_delete_target", 00:13:08.887 "req_id": 1 00:13:08.887 } 00:13:08.887 Got JSON-RPC error response 00:13:08.887 response: 00:13:08.887 { 00:13:08.887 "code": -32602, 00:13:08.887 "message": "The specified target doesn't exist, cannot delete it." 00:13:08.887 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:08.887 02:53:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:08.887 02:53:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:08.887 02:53:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:08.887 02:53:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:13:08.887 02:53:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:08.887 02:53:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:13:08.887 02:53:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:08.887 02:53:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:08.887 rmmod nvme_tcp 00:13:08.887 rmmod nvme_fabrics 00:13:08.887 rmmod nvme_keyring 00:13:08.887 02:53:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:08.887 02:53:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:13:08.887 02:53:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:13:08.887 02:53:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 290359 ']' 00:13:08.887 02:53:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 290359 00:13:08.887 02:53:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@946 -- # '[' -z 290359 ']' 00:13:08.887 02:53:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@950 -- # kill -0 290359 00:13:08.887 02:53:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # uname 00:13:08.887 02:53:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:08.887 02:53:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 290359 00:13:08.887 02:53:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:08.887 02:53:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:08.887 02:53:59 nvmf_tcp.nvmf_invalid -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 290359' 00:13:08.887 killing process with pid 290359 00:13:08.887 02:53:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@965 -- # kill 290359 00:13:08.887 [2024-05-13 02:53:59.570422] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:08.887 02:53:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@970 -- # wait 290359 00:13:09.147 02:53:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:09.147 02:53:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:09.147 02:53:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:09.147 02:53:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:09.147 02:53:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:09.147 02:53:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.147 02:53:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:09.147 02:53:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.055 02:54:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:11.056 00:13:11.056 real 0m8.477s 00:13:11.056 user 0m20.108s 00:13:11.056 sys 0m2.273s 00:13:11.056 02:54:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:11.056 02:54:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:11.056 ************************************ 00:13:11.056 END TEST nvmf_invalid 00:13:11.056 ************************************ 00:13:11.315 02:54:01 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:11.315 02:54:01 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:11.315 02:54:01 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:11.315 02:54:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:11.315 ************************************ 00:13:11.315 START TEST nvmf_abort 00:13:11.315 ************************************ 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:11.315 * Looking for test storage... 
00:13:11.315 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:13:11.315 02:54:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:13.266 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:13.266 02:54:04 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:13.266 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:13.266 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:13.266 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:13.266 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:13:13.267 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:13.267 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:13.267 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:13.267 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- 
# NVMF_INITIATOR_IP=10.0.0.1 00:13:13.267 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:13.267 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:13.267 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:13.267 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:13.267 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:13.267 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:13.267 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:13.267 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:13.267 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:13.267 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:13.267 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:13.267 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:13.525 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:13.525 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:13.525 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:13.526 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:13.526 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:13.526 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:13.526 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:13.526 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:13.526 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:13:13.526 00:13:13.526 --- 10.0.0.2 ping statistics --- 00:13:13.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.526 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:13:13.526 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:13.526 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:13.526 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:13:13.526 00:13:13.526 --- 10.0.0.1 ping statistics --- 00:13:13.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.526 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:13:13.526 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:13.526 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:13:13.526 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:13.526 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:13.526 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:13.526 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:13.526 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:13.526 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:13.526 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:13.526 02:54:04 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:13.526 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:13.526 02:54:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:13.526 02:54:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:13.526 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=292989 00:13:13.526 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:13.526 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 292989 00:13:13.526 02:54:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 292989 ']' 00:13:13.526 02:54:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.526 02:54:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:13.526 02:54:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.526 02:54:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:13.526 02:54:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:13.526 [2024-05-13 02:54:04.237006] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:13:13.526 [2024-05-13 02:54:04.237100] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:13.526 EAL: No free 2048 kB hugepages reported on node 1 00:13:13.526 [2024-05-13 02:54:04.283759] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:13.526 [2024-05-13 02:54:04.315737] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:13.784 [2024-05-13 02:54:04.413669] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:13.784 [2024-05-13 02:54:04.413734] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:13.784 [2024-05-13 02:54:04.413751] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:13.784 [2024-05-13 02:54:04.413765] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:13.784 [2024-05-13 02:54:04.413778] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:13.784 [2024-05-13 02:54:04.413863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:13.784 [2024-05-13 02:54:04.413930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:13.784 [2024-05-13 02:54:04.413933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:13.784 02:54:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:13.784 02:54:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:13:13.784 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:13.784 02:54:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:13.784 02:54:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:13.784 02:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:13.784 02:54:04 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:13.784 02:54:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.784 02:54:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:13.784 [2024-05-13 02:54:04.543257] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:13.784 02:54:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.784 02:54:04 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:13.784 02:54:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.784 02:54:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:13.784 Malloc0 00:13:13.784 02:54:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.784 02:54:04 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:13.784 02:54:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.784 02:54:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:14.044 Delay0 00:13:14.044 02:54:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.044 02:54:04 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:14.044 02:54:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.044 02:54:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:14.044 02:54:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.044 02:54:04 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:14.044 02:54:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.044 02:54:04 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@10 -- # set +x 00:13:14.044 02:54:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.044 02:54:04 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:14.044 02:54:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.044 02:54:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:14.044 [2024-05-13 02:54:04.608283] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:14.044 [2024-05-13 02:54:04.608569] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:14.044 02:54:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.044 02:54:04 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:14.044 02:54:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.044 02:54:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:14.044 02:54:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.044 02:54:04 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:14.044 EAL: No free 2048 kB hugepages reported on node 1 00:13:14.044 [2024-05-13 02:54:04.716351] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:16.580 Initializing NVMe Controllers 00:13:16.580 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:16.580 controller IO queue size 128 less than required 00:13:16.580 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:16.580 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:16.580 Initialization complete. Launching workers. 
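While the abort workers run (their completion counts follow below), the target configuration that was just traced corresponds roughly to the rpc.py sequence sketched here. All values are the ones visible in the trace; inside the test they are issued through the rpc_cmd wrapper against the same /var/tmp/spdk.sock socket.

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # TCP transport with an 8192-byte IO unit size (-u) and max admin queue depth 256 (-a)
    $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256

    # 64 MB malloc bdev (4096-byte blocks) behind a delay bdev that injects artificial
    # read/write latency, so the abort example has outstanding I/O to abort
    $RPC bdev_malloc_create 64 4096 -b Malloc0
    $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000

    # Subsystem cnode0 exporting the delayed namespace on the namespaced 10.0.0.2:4420
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # Abort example: queue depth 128 on core 0 for 1 second against the new subsystem
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 1 -l warning -q 128 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'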
00:13:16.580 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 32045 00:13:16.580 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 32106, failed to submit 62 00:13:16.580 success 32049, unsuccess 57, failed 0 00:13:16.580 02:54:06 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:16.580 02:54:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.580 02:54:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:16.580 02:54:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.580 02:54:06 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:16.580 02:54:06 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:13:16.580 02:54:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:16.580 02:54:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:13:16.580 02:54:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:16.580 02:54:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:13:16.580 02:54:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:16.580 02:54:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:16.580 rmmod nvme_tcp 00:13:16.580 rmmod nvme_fabrics 00:13:16.580 rmmod nvme_keyring 00:13:16.580 02:54:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:16.580 02:54:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:13:16.580 02:54:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:13:16.580 02:54:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 292989 ']' 00:13:16.580 02:54:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 292989 00:13:16.580 02:54:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 292989 ']' 00:13:16.580 02:54:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 292989 00:13:16.580 02:54:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:13:16.580 02:54:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:16.580 02:54:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 292989 00:13:16.580 02:54:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:16.580 02:54:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:16.580 02:54:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 292989' 00:13:16.580 killing process with pid 292989 00:13:16.580 02:54:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # kill 292989 00:13:16.580 [2024-05-13 02:54:06.978826] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:16.580 02:54:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@970 -- # wait 292989 00:13:16.580 02:54:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:16.580 02:54:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:16.580 02:54:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:16.580 02:54:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:16.580 02:54:07 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:16.580 02:54:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:16.580 02:54:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:16.580 02:54:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.486 02:54:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:18.486 00:13:18.486 real 0m7.384s 00:13:18.486 user 0m10.486s 00:13:18.486 sys 0m2.710s 00:13:18.486 02:54:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:18.486 02:54:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:18.486 ************************************ 00:13:18.486 END TEST nvmf_abort 00:13:18.486 ************************************ 00:13:18.745 02:54:09 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:18.745 02:54:09 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:18.745 02:54:09 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:18.745 02:54:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:18.745 ************************************ 00:13:18.745 START TEST nvmf_ns_hotplug_stress 00:13:18.745 ************************************ 00:13:18.745 02:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:18.745 * Looking for test storage... 00:13:18.745 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:18.745 02:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:18.745 02:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:13:18.745 02:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:18.745 02:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:18.745 02:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:18.745 02:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:18.745 02:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:18.745 02:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:18.745 02:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:18.745 02:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:18.745 02:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:18.745 02:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:18.745 02:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:18.745 02:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:18.745 02:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:18.745 02:54:09 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:18.745 02:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:18.745 02:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:18.745 02:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:18.745 02:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:18.745 02:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:18.745 02:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:18.745 02:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.745 02:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.745 02:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.745 02:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:13:18.745 02:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.745 02:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:13:18.745 02:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:18.745 02:54:09 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:18.745 02:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:18.745 02:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:18.745 02:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:18.745 02:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:18.745 02:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:18.745 02:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:18.745 02:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:18.745 02:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:18.745 02:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:18.745 02:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:18.745 02:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:18.745 02:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:18.745 02:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:18.745 02:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.745 02:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:18.745 02:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.745 02:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:18.745 02:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:18.745 02:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:18.745 02:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.660 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:20.660 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:20.660 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:20.661 02:54:11 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:20.661 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:20.661 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:20.661 
02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:20.661 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:20.661 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:20.661 
02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:20.661 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:20.924 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:20.924 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:20.924 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:20.924 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:20.924 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:13:20.924 00:13:20.924 --- 10.0.0.2 ping statistics --- 00:13:20.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.924 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:13:20.924 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:20.924 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:20.924 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:13:20.924 00:13:20.924 --- 10.0.0.1 ping statistics --- 00:13:20.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.924 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:13:20.924 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:20.924 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:13:20.924 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:20.924 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:20.924 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:20.924 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:20.924 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:20.924 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:20.924 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:20.924 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:20.924 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:20.924 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:20.924 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.924 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=295843 00:13:20.924 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:20.924 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 295843 00:13:20.924 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 295843 ']' 00:13:20.924 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.924 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:20.924 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.924 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:20.924 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.925 [2024-05-13 02:54:11.556293] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:13:20.925 [2024-05-13 02:54:11.556362] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:20.925 EAL: No free 2048 kB hugepages reported on node 1 00:13:20.925 [2024-05-13 02:54:11.594548] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
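The device-selection trace above (gather_supported_nvmf_pci_devs and the "Found net devices under 0000:0a:00.x" lines) boils down to: collect the PCI IDs of supported NICs, then map each matching PCI function to its kernel net device through sysfs. A simplified stand-alone equivalent, assuming the E810 device ID 8086:159b seen in this run and using lspci instead of the script's own PCI bus cache:

    # Print "PCI address -> net device" for every Intel E810 (8086:159b) port,
    # the same mapping that yields cvl_0_0 and cvl_0_1 in this log
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$dev" ] && echo "$pci -> ${dev##*/}"
        done
    done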
00:13:20.925 [2024-05-13 02:54:11.622379] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:20.925 [2024-05-13 02:54:11.707139] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:20.925 [2024-05-13 02:54:11.707188] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:20.925 [2024-05-13 02:54:11.707201] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:20.925 [2024-05-13 02:54:11.707212] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:20.925 [2024-05-13 02:54:11.707222] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:20.925 [2024-05-13 02:54:11.707311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:20.925 [2024-05-13 02:54:11.707373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:20.925 [2024-05-13 02:54:11.707375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:21.183 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:21.183 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:13:21.183 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:21.183 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:21.183 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.183 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:21.183 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:21.184 02:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:21.442 [2024-05-13 02:54:12.121324] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:21.442 02:54:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:21.699 02:54:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:21.957 [2024-05-13 02:54:12.667889] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:21.957 [2024-05-13 02:54:12.668130] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:21.957 02:54:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:22.215 02:54:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:22.473 Malloc0 00:13:22.473 02:54:13 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:22.732 Delay0 00:13:22.732 02:54:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:22.991 02:54:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:23.249 NULL1 00:13:23.249 02:54:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:23.508 02:54:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=296151 00:13:23.508 02:54:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:23.508 02:54:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 296151 00:13:23.508 02:54:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.766 EAL: No free 2048 kB hugepages reported on node 1 00:13:25.144 Read completed with error (sct=0, sc=11) 00:13:25.144 02:54:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:25.144 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:25.144 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:25.144 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:25.144 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:25.144 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:25.144 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:25.144 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:25.144 02:54:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:25.144 02:54:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:25.402 true 00:13:25.402 02:54:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 296151 00:13:25.402 02:54:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.336 02:54:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:26.336 02:54:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:26.336 02:54:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:26.594 true 00:13:26.594 02:54:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 296151 00:13:26.594 02:54:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.852 02:54:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:27.110 02:54:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:27.110 02:54:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:27.369 true 00:13:27.369 02:54:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 296151 00:13:27.369 02:54:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.657 02:54:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:27.915 02:54:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:27.915 02:54:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:28.172 true 00:13:28.172 02:54:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 296151 00:13:28.172 02:54:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.548 02:54:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:29.548 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:29.548 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:29.548 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:29.548 02:54:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:29.548 02:54:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:29.805 true 00:13:29.805 02:54:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 296151 00:13:29.805 02:54:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.063 02:54:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:30.321 02:54:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 
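The pattern repeating from here to the end of the test is the hot-plug loop itself: with cnode1 already exporting NULL1 (the null bdev created above) and spdk_nvme_perf running against it, the script keeps re-adding the Delay0 namespace (nsid 1), growing the NULL1 bdev one step at a time, and removing the namespace again, checking on each pass that the perf process (PERF_PID, 296151 in this run) is still alive. A condensed sketch of that loop; the real ns_hotplug_stress.sh also handles timing and cleanup.

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1000

    # Add/remove a namespace under live I/O; stop if the perf workload has died
    while kill -0 "$PERF_PID" 2>/dev/null; do
        $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        $RPC bdev_null_resize NULL1 "$null_size"
        $RPC nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    done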
00:13:30.321 02:54:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:30.579 true 00:13:30.579 02:54:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 296151 00:13:30.579 02:54:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.516 02:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:31.516 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:31.516 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:31.774 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:31.774 02:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:31.774 02:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:32.032 true 00:13:32.032 02:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 296151 00:13:32.032 02:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.290 02:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:32.547 02:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:32.547 02:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:32.805 true 00:13:32.805 02:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 296151 00:13:32.805 02:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:33.742 02:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.000 02:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:34.000 02:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:34.258 true 00:13:34.258 02:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 296151 00:13:34.258 02:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.515 02:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.773 02:54:25 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:34.773 02:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:35.030 true 00:13:35.030 02:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 296151 00:13:35.030 02:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.288 02:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:35.545 02:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:35.545 02:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:35.803 true 00:13:35.803 02:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 296151 00:13:35.803 02:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.737 02:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:36.994 02:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:36.994 02:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:37.252 true 00:13:37.252 02:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 296151 00:13:37.252 02:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.818 02:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:37.818 02:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:37.818 02:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:38.076 true 00:13:38.076 02:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 296151 00:13:38.076 02:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.334 02:54:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:38.593 02:54:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:38.593 02:54:29 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:38.851 true 00:13:38.851 02:54:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 296151 00:13:38.851 02:54:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.230 02:54:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:40.230 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:40.230 02:54:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:40.230 02:54:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:40.488 true 00:13:40.488 02:54:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 296151 00:13:40.488 02:54:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.747 02:54:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:41.005 02:54:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:41.005 02:54:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:41.278 true 00:13:41.278 02:54:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 296151 00:13:41.278 02:54:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.244 02:54:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:42.244 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:42.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:42.504 02:54:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:42.504 02:54:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:42.504 true 00:13:42.762 02:54:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 296151 00:13:42.762 02:54:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.021 02:54:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:43.279 02:54:33 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:43.279 02:54:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:43.279 true 00:13:43.279 02:54:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 296151 00:13:43.279 02:54:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.213 02:54:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:44.213 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:44.471 02:54:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:44.471 02:54:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:44.730 true 00:13:44.730 02:54:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 296151 00:13:44.730 02:54:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.988 02:54:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:45.245 02:54:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:45.245 02:54:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:45.503 true 00:13:45.503 02:54:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 296151 00:13:45.503 02:54:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:46.437 02:54:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:46.695 02:54:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:46.695 02:54:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:46.695 true 00:13:46.695 02:54:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 296151 00:13:46.695 02:54:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:46.953 02:54:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:47.211 02:54:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:47.211 02:54:37 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:47.469 true 00:13:47.469 02:54:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 296151 00:13:47.469 02:54:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.402 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:48.402 02:54:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:48.402 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:48.402 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:48.659 02:54:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:48.659 02:54:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:48.917 true 00:13:48.917 02:54:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 296151 00:13:48.917 02:54:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.175 02:54:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:49.433 02:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:49.433 02:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:49.691 true 00:13:49.691 02:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 296151 00:13:49.691 02:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.623 02:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:50.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:50.881 02:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:50.881 02:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:51.139 true 00:13:51.139 02:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 296151 00:13:51.139 02:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.396 02:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:51.654 02:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:51.654 02:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:51.912 true 00:13:51.912 02:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 296151 00:13:51.912 02:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.845 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:52.845 02:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:53.119 02:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:53.119 02:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:53.119 true 00:13:53.377 02:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 296151 00:13:53.377 02:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.377 02:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:53.635 02:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:53.635 02:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:53.892 true 00:13:53.892 02:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 296151 00:13:53.892 02:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:54.825 02:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:54.825 Initializing NVMe Controllers 00:13:54.825 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:54.825 Controller IO queue size 128, less than required. 00:13:54.825 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:54.825 Controller IO queue size 128, less than required. 00:13:54.825 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:54.825 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:54.825 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:54.825 Initialization complete. Launching workers. 
00:13:54.825 ======================================================== 00:13:54.825 Latency(us) 00:13:54.825 Device Information : IOPS MiB/s Average min max 00:13:54.825 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 807.39 0.39 82786.54 2451.09 1093165.66 00:13:54.825 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10456.86 5.11 12240.67 3057.36 449995.13 00:13:54.825 ======================================================== 00:13:54.825 Total : 11264.25 5.50 17297.22 2451.09 1093165.66 00:13:54.825 00:13:55.111 02:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:13:55.111 02:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:13:55.370 true 00:13:55.370 02:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 296151 00:13:55.370 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (296151) - No such process 00:13:55.370 02:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 296151 00:13:55.370 02:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.628 02:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:55.886 02:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:13:55.886 02:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:13:55.886 02:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:13:55.886 02:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:55.886 02:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:13:56.144 null0 00:13:56.144 02:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:56.144 02:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:56.144 02:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:13:56.401 null1 00:13:56.401 02:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:56.401 02:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:56.401 02:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:13:56.659 null2 00:13:56.659 02:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:56.659 02:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:56.659 02:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:13:56.918 null3 
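The trace above (ns_hotplug_stress.sh@44-@50) is the single-namespace phase of the test: while the I/O generator process (296151 in this run) is still alive, namespace 1 is detached from nqn.2016-06.io.spdk:cnode1, the Delay0 bdev is re-attached, and the NULL1 bdev is grown by one unit per pass; once kill -0 reports "No such process" the loop ends and the namespaces are torn down. A minimal sketch of that loop, with the rpc.py path, subsystem and bdev names taken from the trace and the pacing, start value and $perf_pid variable name assumed, would be:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  null_size=1014                                  # the excerpt above is already at 1014; initial value assumed
  while kill -0 "$perf_pid"; do                   # $perf_pid: the I/O generator, 296151 in this run (variable name assumed)
      "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      null_size=$((null_size + 1))
      "$rpc" bdev_null_resize NULL1 "$null_size"  # prints "true" on success, as seen in the trace
      sleep 1                                     # pacing assumed; not visible in the trace
  done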
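For reference, the latency summary printed above is internally consistent: the Total row combines the two namespace rows, with IOPS and MiB/s summed and the average latency weighted by IOPS (all latencies in microseconds):

  IOPS      807.39 + 10456.86 = 11264.25
  MiB/s     0.39 + 5.11 = 5.50
  Avg (us)  (807.39 * 82786.54 + 10456.86 * 12240.67) / 11264.25 ≈ 17297.2
  min / max 2451.09 / 1093165.66 (the extremes across both rows)

which matches the reported Total line (11264.25 IOPS, 5.50 MiB/s, 17297.22 us average).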
00:13:56.918 02:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:56.918 02:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:56.918 02:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:57.176 null4 00:13:57.176 02:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:57.176 02:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:57.176 02:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:57.433 null5 00:13:57.433 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:57.433 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:57.433 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:57.691 null6 00:13:57.691 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:57.691 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:57.691 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:57.950 null7 00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
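At ns_hotplug_stress.sh@58-@60 the test switches to its multi-worker phase and creates eight null bdevs (null0 through null7, size argument 100, block size 4096), as traced above. A condensed sketch of that setup, with names and arguments taken from the trace, might look like:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nthreads=8
  pids=()
  for (( i = 0; i < nthreads; i++ )); do
      "$rpc" bdev_null_create "null$i" 100 4096   # null0 .. null7, as seen in the trace
  done

The add_remove workers launched against these bdevs are sketched after the wait call below.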
00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.950 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:57.951 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:57.951 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:57.951 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:57.951 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:57.951 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:57.951 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:57.951 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.951 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:57.951 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:57.951 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:57.951 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:57.951 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:57.951 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:57.951 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:57.951 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 300325 300326 300328 300330 300332 300334 300336 300338 00:13:57.951 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.951 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:58.209 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:58.209 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:58.209 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:58.209 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:58.209 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:58.209 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:58.209 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:58.209 02:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.467 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.467 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.467 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:58.467 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.467 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.467 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
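Each of the eight background workers then exercises one namespace ID against its bdev: the @14-@18 trace lines imply a small add/remove loop (ten passes per worker), and @62-@66 show the workers being launched and waited on (pids 300325 through 300338 in this run). The function body itself is not printed in the log, so the following is a reconstruction from the trace, not the script verbatim:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  add_remove() {                                  # traced at ns_hotplug_stress.sh@14-@18
      local nsid=$1 bdev=$2 i
      for (( i = 0; i < 10; i++ )); do
          "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
          "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
  }
  pids=()
  for (( i = 0; i < 8; i++ )); do                 # traced at @62-@64
      add_remove $((i + 1)) "null$i" &            # add_remove 1 null0 ... add_remove 8 null7
      pids+=($!)
  done
  wait "${pids[@]}"                               # @66: wait on all eight worker pids shown above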
nqn.2016-06.io.spdk:cnode1 null3 00:13:58.467 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.467 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.467 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:58.467 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.467 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.467 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:58.467 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.467 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.467 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:58.468 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.468 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.468 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:58.468 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.468 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.468 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:58.468 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.468 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.468 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:58.726 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:58.726 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:58.726 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:58.726 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:58.726 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:58.726 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:58.726 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:58.726 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.984 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.984 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.984 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:58.984 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.984 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.984 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:58.984 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.984 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.984 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:58.984 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.984 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.984 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:58.984 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.984 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.984 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:58.984 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.984 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.984 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:58.984 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.984 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.984 02:54:49 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.984 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.984 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:58.984 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:59.243 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:59.243 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:59.243 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:59.243 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:59.243 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:59.243 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:59.243 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:59.243 02:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.501 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.501 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.501 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:59.501 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.501 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.501 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:59.501 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.501 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.501 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:59.501 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.501 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.501 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:59.501 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.501 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.501 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.501 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:59.501 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.501 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:59.502 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.502 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.502 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:59.502 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.502 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.502 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:59.760 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:59.760 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:59.760 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:59.760 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:59.760 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.760 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:59.760 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:59.760 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:00.018 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.018 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.018 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:00.018 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.018 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.018 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:00.018 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.018 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.018 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.018 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:00.018 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.018 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:00.018 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.018 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.018 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.018 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.018 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:00.018 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:00.018 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.018 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.018 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:00.018 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.018 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.018 
02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:00.276 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:00.277 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:00.277 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:00.277 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:00.277 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.277 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:00.277 02:54:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:00.277 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:00.535 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.535 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.535 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:00.535 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.535 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.535 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:00.535 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.535 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.535 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:00.535 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.535 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.535 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:00.535 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.535 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.535 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.535 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:00.535 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.535 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:00.535 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.535 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.535 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:00.535 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.535 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.535 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:00.794 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:00.794 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:00.794 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:00.794 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.794 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:00.794 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:00.794 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:00.794 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:01.051 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:14:01.051 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.051 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:01.051 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.051 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.051 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:01.051 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.051 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.051 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:01.051 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.051 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.052 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:01.052 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.052 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.052 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:01.052 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.052 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.052 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:01.052 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.052 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.052 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:01.052 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.052 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.052 02:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:01.310 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:01.310 
02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:01.310 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:01.310 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:01.310 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:01.310 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:01.310 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:01.310 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:01.568 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.568 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.568 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:01.568 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.568 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.568 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:01.568 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.568 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.568 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:01.568 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.568 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.568 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:01.568 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.568 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.568 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:01.568 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.568 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.568 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:01.568 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.568 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.568 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:01.568 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.568 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.568 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:01.827 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:01.827 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:01.827 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:01.827 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:01.827 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:01.827 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:01.827 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:01.827 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:02.085 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.085 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.085 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:02.085 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:14:02.085 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.085 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:02.085 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.085 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.085 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:02.085 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.085 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.085 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:02.085 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.085 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.085 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:02.344 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.344 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.344 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:02.344 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.344 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.344 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:02.344 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.344 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.344 02:54:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:02.344 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:02.344 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:02.344 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:02.603 
02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.603 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:02.603 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:02.603 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:02.603 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:02.861 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.861 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.861 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:02.861 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.861 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.861 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:02.861 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.861 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.861 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:02.861 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.861 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.861 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:02.861 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.862 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.862 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:02.862 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.862 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.862 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:02.862 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.862 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.862 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:02.862 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.862 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.862 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:03.120 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:03.120 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:03.120 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:03.121 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:03.121 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:03.121 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:03.121 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:03.121 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:03.378 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.378 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.378 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.378 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.378 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.378 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.378 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.378 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.378 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:14:03.378 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.378 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.378 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.378 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.378 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.378 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.378 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.378 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:03.378 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:14:03.378 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:03.378 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:14:03.378 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:03.378 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:14:03.378 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:03.378 02:54:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:03.378 rmmod nvme_tcp 00:14:03.378 rmmod nvme_fabrics 00:14:03.378 rmmod nvme_keyring 00:14:03.378 02:54:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:03.378 02:54:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:14:03.378 02:54:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:14:03.378 02:54:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 295843 ']' 00:14:03.378 02:54:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 295843 00:14:03.378 02:54:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 295843 ']' 00:14:03.378 02:54:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 295843 00:14:03.378 02:54:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname 00:14:03.378 02:54:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:03.378 02:54:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 295843 00:14:03.378 02:54:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:03.378 02:54:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:03.378 02:54:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 295843' 00:14:03.378 killing process with pid 295843 00:14:03.378 02:54:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 295843 00:14:03.378 [2024-05-13 02:54:54.051779] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:03.378 02:54:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 295843 00:14:03.636 02:54:54 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:03.636 02:54:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:03.636 02:54:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:03.636 02:54:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:03.636 02:54:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:03.636 02:54:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:03.636 02:54:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:03.637 02:54:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:05.538 02:54:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:05.538 00:14:05.538 real 0m47.003s 00:14:05.538 user 3m33.918s 00:14:05.538 sys 0m16.552s 00:14:05.538 02:54:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:05.538 02:54:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.538 ************************************ 00:14:05.538 END TEST nvmf_ns_hotplug_stress 00:14:05.538 ************************************ 00:14:05.797 02:54:56 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:05.797 02:54:56 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:05.797 02:54:56 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:05.797 02:54:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:05.797 ************************************ 00:14:05.797 START TEST nvmf_connect_stress 00:14:05.797 ************************************ 00:14:05.797 02:54:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:05.797 * Looking for test storage... 
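The ns_hotplug_stress run that just ended is driven by test/nvmf/target/ns_hotplug_stress.sh (the script named in the trace prefixes), and the traced RPCs boil down to repeatedly attaching and detaching namespaces on nqn.2016-06.io.spdk:cnode1 via scripts/rpc.py. Below is a minimal sketch of that pattern, not the upstream script verbatim; it assumes a running nvmf target with null bdevs null0..null7 already created, and reuses the rpc.py path from this workspace.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

i=0
while (( i < 10 )); do
    # attach nsid N backed by bdev null$((N-1)), in random order as in the trace
    for n in $(seq 1 8 | shuf); do
        "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
    done
    # detach the same namespaces again, also in random order
    for n in $(seq 1 8 | shuf); do
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"
    done
    (( ++i ))
done

The shuf calls stand in for the randomized add/remove order visible in the log (nsid 2, 5, 7, ... then 6, 3, 4, ...); exercising attach/detach in arbitrary order while the target stays live is the point of the stress test.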
00:14:05.797 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:05.797 02:54:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:05.797 02:54:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:05.797 02:54:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:05.797 02:54:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:05.797 02:54:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:05.797 02:54:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:05.797 02:54:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:05.797 02:54:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:05.797 02:54:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:05.797 02:54:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:05.797 02:54:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:05.797 02:54:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:05.797 02:54:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:05.797 02:54:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:05.797 02:54:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:05.797 02:54:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:05.797 02:54:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:05.797 02:54:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:05.797 02:54:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:05.797 02:54:56 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:05.797 02:54:56 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:05.797 02:54:56 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:05.797 02:54:56 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.797 02:54:56 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.797 02:54:56 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.797 02:54:56 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:05.797 02:54:56 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.797 02:54:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:14:05.797 02:54:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:05.797 02:54:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:05.797 02:54:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:05.797 02:54:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:05.797 02:54:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:05.797 02:54:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:05.797 02:54:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:05.797 02:54:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:05.797 02:54:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:05.797 02:54:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:05.797 02:54:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:05.797 02:54:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:05.797 02:54:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:05.797 02:54:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:05.797 02:54:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:05.797 02:54:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:14:05.797 02:54:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:05.797 02:54:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:05.797 02:54:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:05.797 02:54:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:14:05.797 02:54:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:07.699 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:07.699 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:07.699 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:07.699 02:54:58 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:07.699 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:07.699 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:07.958 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:07.958 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:07.958 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:07.958 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:07.958 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:07.958 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:07.958 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:07.958 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:07.958 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:14:07.958 00:14:07.958 --- 10.0.0.2 ping statistics --- 00:14:07.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.958 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:14:07.958 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:07.958 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:07.958 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:14:07.958 00:14:07.958 --- 10.0.0.1 ping statistics --- 00:14:07.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.958 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:14:07.958 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:07.958 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:14:07.958 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:07.958 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:07.958 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:07.958 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:07.958 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:07.958 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:07.958 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:07.958 02:54:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:07.958 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:07.958 02:54:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:07.958 02:54:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.958 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=303080 00:14:07.958 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:07.958 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 303080 00:14:07.958 02:54:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 303080 ']' 00:14:07.958 02:54:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.958 02:54:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:07.958 02:54:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.958 02:54:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:07.958 02:54:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.958 [2024-05-13 02:54:58.660893] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 
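Condensed from the nvmf_tcp_init trace above: the harness moves the target-side port into its own network namespace, leaves the initiator port in the root namespace, and checks reachability both ways before launching nvmf_tgt inside that namespace. The sketch below only restates those traced commands; the interface names cvl_0_0 / cvl_0_1 are the ones this host enumerated and will differ elsewhere, and everything must run as root.

TARGET_IF=cvl_0_0
INITIATOR_IF=cvl_0_1
TARGET_IP=10.0.0.2
INITIATOR_IP=10.0.0.1
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# open the NVMe/TCP port and verify both directions, as in the log above
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 "$TARGET_IP"
ip netns exec "$NS" ping -c 1 "$INITIATOR_IP"

# the target itself then runs inside the namespace, e.g.:
# ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE

The connect_stress bring-up traced further below then reduces to four RPCs plus the stress binary itself; rpc_cmd in the harness issues the same JSON-RPCs that scripts/rpc.py exposes on the command line, and the paths here are written relative to the SPDK tree rather than the Jenkins workspace.

rpc=./scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

"$rpc" nvmf_create_transport -t tcp -o -u 8192
"$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
"$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
"$rpc" bdev_null_create NULL1 1000 512

# hammer connect/disconnect against the listener for 10 seconds on core 0
./test/nvme/connect_stress/connect_stress -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -t 10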
00:14:07.958 [2024-05-13 02:54:58.660975] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:07.958 EAL: No free 2048 kB hugepages reported on node 1 00:14:07.958 [2024-05-13 02:54:58.700542] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:07.958 [2024-05-13 02:54:58.732956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:08.217 [2024-05-13 02:54:58.823930] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:08.217 [2024-05-13 02:54:58.824002] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:08.217 [2024-05-13 02:54:58.824018] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:08.217 [2024-05-13 02:54:58.824032] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:08.217 [2024-05-13 02:54:58.824044] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:08.217 [2024-05-13 02:54:58.824127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:08.217 [2024-05-13 02:54:58.824240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:08.217 [2024-05-13 02:54:58.824243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:08.217 02:54:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:08.217 02:54:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:14:08.217 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:08.217 02:54:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:08.217 02:54:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.217 02:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:08.217 02:54:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:08.217 02:54:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.217 02:54:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.217 [2024-05-13 02:54:58.950435] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:08.217 02:54:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.217 02:54:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:08.217 02:54:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.217 02:54:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.217 02:54:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.217 02:54:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:08.217 02:54:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:14:08.217 02:54:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.217 [2024-05-13 02:54:58.967353] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:08.217 [2024-05-13 02:54:58.974843] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:08.217 02:54:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.217 02:54:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:08.217 02:54:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.217 02:54:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.217 NULL1 00:14:08.217 02:54:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.217 02:54:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=303111 00:14:08.217 02:54:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:08.217 02:54:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:08.217 02:54:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:08.217 02:54:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:08.217 02:54:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.217 02:54:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.217 02:54:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.217 02:54:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.217 02:54:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.217 02:54:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.217 02:54:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.217 02:54:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.217 02:54:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.217 02:54:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.217 02:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.217 02:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.217 02:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.217 02:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.217 02:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.217 02:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.217 02:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.217 
02:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.217 02:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.217 02:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.217 02:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.217 02:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.217 02:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.217 02:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.217 EAL: No free 2048 kB hugepages reported on node 1 00:14:08.217 02:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.217 02:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.217 02:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.217 02:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.217 02:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.479 02:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.479 02:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.479 02:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.479 02:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.479 02:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.479 02:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.479 02:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.479 02:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.479 02:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.479 02:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.479 02:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.479 02:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 303111 00:14:08.479 02:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:08.479 02:54:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.479 02:54:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.775 02:54:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.775 02:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 303111 00:14:08.775 02:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:08.775 02:54:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.775 02:54:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.035 02:54:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.035 02:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 303111 00:14:09.035 02:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.035 02:54:59 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.035 02:54:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.293 02:54:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.293 02:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 303111 00:14:09.293 02:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.293 02:54:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.293 02:54:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.551 02:55:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.551 02:55:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 303111 00:14:09.551 02:55:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.551 02:55:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.551 02:55:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.116 02:55:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.116 02:55:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 303111 00:14:10.116 02:55:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.116 02:55:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.116 02:55:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.373 02:55:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.373 02:55:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 303111 00:14:10.373 02:55:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.373 02:55:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.373 02:55:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.629 02:55:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.630 02:55:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 303111 00:14:10.630 02:55:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.630 02:55:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.630 02:55:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.887 02:55:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.887 02:55:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 303111 00:14:10.887 02:55:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.887 02:55:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.887 02:55:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:11.144 02:55:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.144 02:55:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 303111 00:14:11.144 02:55:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.144 02:55:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:14:11.144 02:55:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:11.709 02:55:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.709 02:55:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 303111 00:14:11.709 02:55:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.709 02:55:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.709 02:55:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:11.967 02:55:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.967 02:55:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 303111 00:14:11.967 02:55:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.967 02:55:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.967 02:55:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.224 02:55:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.224 02:55:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 303111 00:14:12.224 02:55:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.225 02:55:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.225 02:55:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.483 02:55:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.483 02:55:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 303111 00:14:12.483 02:55:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.483 02:55:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.483 02:55:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.740 02:55:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.740 02:55:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 303111 00:14:12.740 02:55:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.740 02:55:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.740 02:55:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:13.304 02:55:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.304 02:55:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 303111 00:14:13.304 02:55:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.304 02:55:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.304 02:55:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:13.561 02:55:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.561 02:55:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 303111 00:14:13.561 02:55:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.561 02:55:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.561 02:55:04 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:14:13.818 02:55:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.818 02:55:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 303111 00:14:13.818 02:55:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.818 02:55:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.819 02:55:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.076 02:55:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.076 02:55:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 303111 00:14:14.076 02:55:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.076 02:55:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.076 02:55:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.334 02:55:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.334 02:55:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 303111 00:14:14.591 02:55:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.591 02:55:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.591 02:55:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.849 02:55:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.849 02:55:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 303111 00:14:14.849 02:55:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.849 02:55:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.849 02:55:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.107 02:55:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.107 02:55:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 303111 00:14:15.107 02:55:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.107 02:55:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.107 02:55:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.364 02:55:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.364 02:55:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 303111 00:14:15.364 02:55:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.364 02:55:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.364 02:55:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.622 02:55:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.880 02:55:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 303111 00:14:15.880 02:55:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.880 02:55:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.880 02:55:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.138 02:55:06 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.138 02:55:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 303111 00:14:16.138 02:55:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.138 02:55:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.138 02:55:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.396 02:55:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.396 02:55:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 303111 00:14:16.396 02:55:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.396 02:55:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.396 02:55:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.654 02:55:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.654 02:55:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 303111 00:14:16.654 02:55:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.654 02:55:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.654 02:55:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.912 02:55:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.912 02:55:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 303111 00:14:16.912 02:55:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.912 02:55:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.912 02:55:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.476 02:55:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.476 02:55:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 303111 00:14:17.476 02:55:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.476 02:55:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.476 02:55:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.734 02:55:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.734 02:55:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 303111 00:14:17.734 02:55:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.734 02:55:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.734 02:55:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.992 02:55:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.992 02:55:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 303111 00:14:17.992 02:55:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.992 02:55:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.992 02:55:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.250 02:55:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:14:18.250 02:55:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 303111 00:14:18.250 02:55:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.250 02:55:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.250 02:55:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.507 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:18.766 02:55:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.766 02:55:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 303111 00:14:18.766 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (303111) - No such process 00:14:18.766 02:55:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 303111 00:14:18.766 02:55:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:18.766 02:55:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:18.766 02:55:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:18.766 02:55:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:18.766 02:55:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:14:18.766 02:55:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:18.766 02:55:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:14:18.766 02:55:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:18.766 02:55:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:18.766 rmmod nvme_tcp 00:14:18.766 rmmod nvme_fabrics 00:14:18.766 rmmod nvme_keyring 00:14:18.766 02:55:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:18.766 02:55:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:14:18.766 02:55:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:14:18.766 02:55:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 303080 ']' 00:14:18.766 02:55:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 303080 00:14:18.766 02:55:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 303080 ']' 00:14:18.766 02:55:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 303080 00:14:18.766 02:55:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:14:18.766 02:55:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:18.766 02:55:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 303080 00:14:18.766 02:55:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:18.766 02:55:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:18.766 02:55:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 303080' 00:14:18.766 killing process with pid 303080 00:14:18.766 02:55:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 303080 00:14:18.766 [2024-05-13 02:55:09.390721] 
app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:18.766 02:55:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 303080 00:14:19.025 02:55:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:19.025 02:55:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:19.025 02:55:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:19.025 02:55:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:19.025 02:55:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:19.025 02:55:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.025 02:55:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:19.025 02:55:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:20.927 02:55:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:20.927 00:14:20.927 real 0m15.265s 00:14:20.927 user 0m37.708s 00:14:20.927 sys 0m6.261s 00:14:20.927 02:55:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:20.927 02:55:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:20.927 ************************************ 00:14:20.927 END TEST nvmf_connect_stress 00:14:20.928 ************************************ 00:14:20.928 02:55:11 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:20.928 02:55:11 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:20.928 02:55:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:20.928 02:55:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:20.928 ************************************ 00:14:20.928 START TEST nvmf_fused_ordering 00:14:20.928 ************************************ 00:14:20.928 02:55:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:21.187 * Looking for test storage... 
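The teardown just traced for connect_stress is worth spelling out: once kill -0 303111 reports the stress helper gone, the script removes its rpc.txt scratch file and calls nvmftestfini, which unloads the kernel NVMe/TCP modules, kills the nvmf_tgt process (303080 in this run), and tears down the test network namespace. A minimal sketch of that teardown follows; killprocess and _remove_spdk_ns are autotest helper functions, and the ip netns delete line is an assumption about what _remove_spdk_ns amounts to here.

# hedged sketch of the nvmftestfini teardown traced above
modprobe -v -r nvme-tcp            # also drops nvme_fabrics/nvme_keyring, as the rmmod lines show
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"                      # killprocess 303080 in this run
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true     # assumption: what _remove_spdk_ns does here
ip -4 addr flush cvl_0_1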
00:14:21.187 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:21.187 02:55:11 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:21.187 02:55:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:21.187 02:55:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:21.187 02:55:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:21.187 02:55:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:21.187 02:55:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:21.187 02:55:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:21.187 02:55:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:21.187 02:55:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:21.187 02:55:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:21.187 02:55:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:21.187 02:55:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:21.187 02:55:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:21.187 02:55:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:21.187 02:55:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:21.187 02:55:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:21.187 02:55:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:21.187 02:55:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:21.187 02:55:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:21.187 02:55:11 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:21.187 02:55:11 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:21.187 02:55:11 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:21.187 02:55:11 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.187 02:55:11 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.187 02:55:11 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.187 02:55:11 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:21.187 02:55:11 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.187 02:55:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:14:21.187 02:55:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:21.187 02:55:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:21.187 02:55:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:21.187 02:55:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:21.187 02:55:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:21.187 02:55:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:21.187 02:55:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:21.187 02:55:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:21.187 02:55:11 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:21.187 02:55:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:21.187 02:55:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:21.187 02:55:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:21.187 02:55:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:21.187 02:55:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:21.187 02:55:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.187 02:55:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:14:21.187 02:55:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:21.187 02:55:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:21.187 02:55:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:21.187 02:55:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:14:21.187 02:55:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:23.124 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:23.125 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:23.125 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:23.125 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:23.125 02:55:13 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:23.125 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:23.125 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:23.125 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:14:23.125 00:14:23.125 --- 10.0.0.2 ping statistics --- 00:14:23.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.125 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:23.125 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:23.125 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:14:23.125 00:14:23.125 --- 10.0.0.1 ping statistics --- 00:14:23.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.125 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=306255 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 306255 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 306255 ']' 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:23.125 02:55:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:23.383 [2024-05-13 02:55:13.959608] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 
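nvmfappstart -m 0x2 above launches the SPDK target inside the cvl_0_0_ns_spdk namespace and then blocks in waitforlisten until the RPC socket answers. Roughly, and assuming the checkout path used in this run plus the default /var/tmp/spdk.sock RPC socket, the pattern is:

# hedged sketch of the nvmfappstart / waitforlisten pattern traced above
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!                                   # 306255 in this run
# waitforlisten polls rpc.py until the target's UNIX-domain RPC socket responds
until "$SPDK_DIR/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
done

The -m 0x2 core mask is why the startup log that follows reports a single reactor on core 1, and -e 0xFFFF is the tracepoint group mask echoed by app_setup_trace.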
00:14:23.383 [2024-05-13 02:55:13.959710] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:23.383 EAL: No free 2048 kB hugepages reported on node 1 00:14:23.383 [2024-05-13 02:55:13.999112] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:23.383 [2024-05-13 02:55:14.031710] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.383 [2024-05-13 02:55:14.126791] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:23.383 [2024-05-13 02:55:14.126855] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:23.383 [2024-05-13 02:55:14.126882] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:23.383 [2024-05-13 02:55:14.126895] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:23.383 [2024-05-13 02:55:14.126907] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:23.383 [2024-05-13 02:55:14.126937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:23.641 02:55:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:23.641 02:55:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:14:23.641 02:55:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:23.641 02:55:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:23.641 02:55:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:23.641 02:55:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:23.642 02:55:14 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:23.642 02:55:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.642 02:55:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:23.642 [2024-05-13 02:55:14.263073] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:23.642 02:55:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.642 02:55:14 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:23.642 02:55:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.642 02:55:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:23.642 02:55:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.642 02:55:14 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:23.642 02:55:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.642 02:55:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:23.642 [2024-05-13 02:55:14.279013] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:23.642 [2024-05-13 02:55:14.279296] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:23.642 02:55:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.642 02:55:14 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:23.642 02:55:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.642 02:55:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:23.642 NULL1 00:14:23.642 02:55:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.642 02:55:14 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:23.642 02:55:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.642 02:55:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:23.642 02:55:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.642 02:55:14 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:23.642 02:55:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.642 02:55:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:23.642 02:55:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.642 02:55:14 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:23.642 [2024-05-13 02:55:14.323562] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:14:23.642 [2024-05-13 02:55:14.323601] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid306406 ] 00:14:23.642 EAL: No free 2048 kB hugepages reported on node 1 00:14:23.642 [2024-05-13 02:55:14.355793] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
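With the target listening, the RPC calls traced above build the whole fused_ordering fixture: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1, a listener on 10.0.0.2:4420, and a 1000 MiB null bdev exposed as namespace 1 (the 1GB namespace reported below). A condensed sketch of the same sequence, assuming rpc.py against the default socket (which is what rpc_cmd wraps in this suite):

# hedged sketch of the target-side setup and the initiator-side run traced above
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$SPDK_DIR/scripts/rpc.py"
"$rpc" nvmf_create_transport -t tcp -o -u 8192                 # TCP transport, 8192-byte IO unit
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$rpc" bdev_null_create NULL1 1000 512                         # 1000 MiB, 512-byte blocks
"$rpc" bdev_wait_for_examine
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
# initiator side: the fused_ordering example connects over TCP and drives the
# fused_ordering(N) iterations counted in the output that follows
"$SPDK_DIR/test/nvme/fused_ordering/fused_ordering" \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'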
00:14:24.579 Attached to nqn.2016-06.io.spdk:cnode1 00:14:24.579 Namespace ID: 1 size: 1GB 00:14:24.579 fused_ordering(0) 00:14:24.579 fused_ordering(1) 00:14:24.579 fused_ordering(2) 00:14:24.579 fused_ordering(3) 00:14:24.579 fused_ordering(4) 00:14:24.579 fused_ordering(5) 00:14:24.579 fused_ordering(6) 00:14:24.579 fused_ordering(7) 00:14:24.579 fused_ordering(8) 00:14:24.579 fused_ordering(9) 00:14:24.579 fused_ordering(10) 00:14:24.579 fused_ordering(11) 00:14:24.579 fused_ordering(12) 00:14:24.579 fused_ordering(13) 00:14:24.579 fused_ordering(14) 00:14:24.579 fused_ordering(15) 00:14:24.579 fused_ordering(16) 00:14:24.579 fused_ordering(17) 00:14:24.579 fused_ordering(18) 00:14:24.579 fused_ordering(19) 00:14:24.579 fused_ordering(20) 00:14:24.579 fused_ordering(21) 00:14:24.579 fused_ordering(22) 00:14:24.579 fused_ordering(23) 00:14:24.579 fused_ordering(24) 00:14:24.579 fused_ordering(25) 00:14:24.579 fused_ordering(26) 00:14:24.579 fused_ordering(27) 00:14:24.579 fused_ordering(28) 00:14:24.579 fused_ordering(29) 00:14:24.579 fused_ordering(30) 00:14:24.579 fused_ordering(31) 00:14:24.579 fused_ordering(32) 00:14:24.579 fused_ordering(33) 00:14:24.579 fused_ordering(34) 00:14:24.579 fused_ordering(35) 00:14:24.579 fused_ordering(36) 00:14:24.579 fused_ordering(37) 00:14:24.579 fused_ordering(38) 00:14:24.579 fused_ordering(39) 00:14:24.579 fused_ordering(40) 00:14:24.579 fused_ordering(41) 00:14:24.579 fused_ordering(42) 00:14:24.579 fused_ordering(43) 00:14:24.579 fused_ordering(44) 00:14:24.579 fused_ordering(45) 00:14:24.579 fused_ordering(46) 00:14:24.579 fused_ordering(47) 00:14:24.579 fused_ordering(48) 00:14:24.579 fused_ordering(49) 00:14:24.579 fused_ordering(50) 00:14:24.579 fused_ordering(51) 00:14:24.579 fused_ordering(52) 00:14:24.579 fused_ordering(53) 00:14:24.579 fused_ordering(54) 00:14:24.579 fused_ordering(55) 00:14:24.579 fused_ordering(56) 00:14:24.579 fused_ordering(57) 00:14:24.579 fused_ordering(58) 00:14:24.579 fused_ordering(59) 00:14:24.579 fused_ordering(60) 00:14:24.579 fused_ordering(61) 00:14:24.579 fused_ordering(62) 00:14:24.579 fused_ordering(63) 00:14:24.579 fused_ordering(64) 00:14:24.579 fused_ordering(65) 00:14:24.579 fused_ordering(66) 00:14:24.579 fused_ordering(67) 00:14:24.579 fused_ordering(68) 00:14:24.579 fused_ordering(69) 00:14:24.579 fused_ordering(70) 00:14:24.579 fused_ordering(71) 00:14:24.579 fused_ordering(72) 00:14:24.579 fused_ordering(73) 00:14:24.579 fused_ordering(74) 00:14:24.579 fused_ordering(75) 00:14:24.579 fused_ordering(76) 00:14:24.579 fused_ordering(77) 00:14:24.579 fused_ordering(78) 00:14:24.579 fused_ordering(79) 00:14:24.579 fused_ordering(80) 00:14:24.579 fused_ordering(81) 00:14:24.579 fused_ordering(82) 00:14:24.579 fused_ordering(83) 00:14:24.579 fused_ordering(84) 00:14:24.579 fused_ordering(85) 00:14:24.579 fused_ordering(86) 00:14:24.579 fused_ordering(87) 00:14:24.579 fused_ordering(88) 00:14:24.579 fused_ordering(89) 00:14:24.579 fused_ordering(90) 00:14:24.579 fused_ordering(91) 00:14:24.579 fused_ordering(92) 00:14:24.579 fused_ordering(93) 00:14:24.579 fused_ordering(94) 00:14:24.579 fused_ordering(95) 00:14:24.579 fused_ordering(96) 00:14:24.579 fused_ordering(97) 00:14:24.579 fused_ordering(98) 00:14:24.579 fused_ordering(99) 00:14:24.579 fused_ordering(100) 00:14:24.579 fused_ordering(101) 00:14:24.579 fused_ordering(102) 00:14:24.579 fused_ordering(103) 00:14:24.579 fused_ordering(104) 00:14:24.579 fused_ordering(105) 00:14:24.579 fused_ordering(106) 00:14:24.579 fused_ordering(107) 
00:14:24.579 fused_ordering(108) 00:14:24.579 fused_ordering(109) 00:14:24.579 fused_ordering(110) 00:14:24.579 fused_ordering(111) 00:14:24.579 fused_ordering(112) 00:14:24.579 fused_ordering(113) 00:14:24.579 fused_ordering(114) 00:14:24.579 fused_ordering(115) 00:14:24.579 fused_ordering(116) 00:14:24.579 fused_ordering(117) 00:14:24.579 fused_ordering(118) 00:14:24.579 fused_ordering(119) 00:14:24.579 fused_ordering(120) 00:14:24.579 fused_ordering(121) 00:14:24.579 fused_ordering(122) 00:14:24.579 fused_ordering(123) 00:14:24.579 fused_ordering(124) 00:14:24.579 fused_ordering(125) 00:14:24.579 fused_ordering(126) 00:14:24.579 fused_ordering(127) 00:14:24.579 fused_ordering(128) 00:14:24.579 fused_ordering(129) 00:14:24.579 fused_ordering(130) 00:14:24.579 fused_ordering(131) 00:14:24.579 fused_ordering(132) 00:14:24.579 fused_ordering(133) 00:14:24.579 fused_ordering(134) 00:14:24.579 fused_ordering(135) 00:14:24.579 fused_ordering(136) 00:14:24.579 fused_ordering(137) 00:14:24.579 fused_ordering(138) 00:14:24.579 fused_ordering(139) 00:14:24.579 fused_ordering(140) 00:14:24.579 fused_ordering(141) 00:14:24.579 fused_ordering(142) 00:14:24.579 fused_ordering(143) 00:14:24.579 fused_ordering(144) 00:14:24.579 fused_ordering(145) 00:14:24.579 fused_ordering(146) 00:14:24.579 fused_ordering(147) 00:14:24.579 fused_ordering(148) 00:14:24.579 fused_ordering(149) 00:14:24.579 fused_ordering(150) 00:14:24.579 fused_ordering(151) 00:14:24.579 fused_ordering(152) 00:14:24.579 fused_ordering(153) 00:14:24.579 fused_ordering(154) 00:14:24.579 fused_ordering(155) 00:14:24.579 fused_ordering(156) 00:14:24.579 fused_ordering(157) 00:14:24.579 fused_ordering(158) 00:14:24.579 fused_ordering(159) 00:14:24.579 fused_ordering(160) 00:14:24.579 fused_ordering(161) 00:14:24.579 fused_ordering(162) 00:14:24.579 fused_ordering(163) 00:14:24.579 fused_ordering(164) 00:14:24.579 fused_ordering(165) 00:14:24.579 fused_ordering(166) 00:14:24.579 fused_ordering(167) 00:14:24.579 fused_ordering(168) 00:14:24.579 fused_ordering(169) 00:14:24.579 fused_ordering(170) 00:14:24.579 fused_ordering(171) 00:14:24.579 fused_ordering(172) 00:14:24.579 fused_ordering(173) 00:14:24.579 fused_ordering(174) 00:14:24.579 fused_ordering(175) 00:14:24.579 fused_ordering(176) 00:14:24.579 fused_ordering(177) 00:14:24.579 fused_ordering(178) 00:14:24.579 fused_ordering(179) 00:14:24.579 fused_ordering(180) 00:14:24.579 fused_ordering(181) 00:14:24.579 fused_ordering(182) 00:14:24.579 fused_ordering(183) 00:14:24.579 fused_ordering(184) 00:14:24.579 fused_ordering(185) 00:14:24.579 fused_ordering(186) 00:14:24.579 fused_ordering(187) 00:14:24.579 fused_ordering(188) 00:14:24.579 fused_ordering(189) 00:14:24.579 fused_ordering(190) 00:14:24.579 fused_ordering(191) 00:14:24.579 fused_ordering(192) 00:14:24.579 fused_ordering(193) 00:14:24.579 fused_ordering(194) 00:14:24.579 fused_ordering(195) 00:14:24.579 fused_ordering(196) 00:14:24.579 fused_ordering(197) 00:14:24.579 fused_ordering(198) 00:14:24.579 fused_ordering(199) 00:14:24.579 fused_ordering(200) 00:14:24.579 fused_ordering(201) 00:14:24.579 fused_ordering(202) 00:14:24.579 fused_ordering(203) 00:14:24.579 fused_ordering(204) 00:14:24.579 fused_ordering(205) 00:14:25.517 fused_ordering(206) 00:14:25.517 fused_ordering(207) 00:14:25.517 fused_ordering(208) 00:14:25.517 fused_ordering(209) 00:14:25.517 fused_ordering(210) 00:14:25.517 fused_ordering(211) 00:14:25.517 fused_ordering(212) 00:14:25.517 fused_ordering(213) 00:14:25.517 fused_ordering(214) 00:14:25.517 
fused_ordering(215) 00:14:25.517 fused_ordering(216) 00:14:25.517 fused_ordering(217) 00:14:25.517 fused_ordering(218) 00:14:25.517 fused_ordering(219) 00:14:25.517 fused_ordering(220) 00:14:25.517 fused_ordering(221) 00:14:25.517 fused_ordering(222) 00:14:25.517 fused_ordering(223) 00:14:25.517 fused_ordering(224) 00:14:25.517 fused_ordering(225) 00:14:25.517 fused_ordering(226) 00:14:25.517 fused_ordering(227) 00:14:25.517 fused_ordering(228) 00:14:25.517 fused_ordering(229) 00:14:25.517 fused_ordering(230) 00:14:25.517 fused_ordering(231) 00:14:25.517 fused_ordering(232) 00:14:25.517 fused_ordering(233) 00:14:25.517 fused_ordering(234) 00:14:25.517 fused_ordering(235) 00:14:25.517 fused_ordering(236) 00:14:25.517 fused_ordering(237) 00:14:25.517 fused_ordering(238) 00:14:25.517 fused_ordering(239) 00:14:25.517 fused_ordering(240) 00:14:25.517 fused_ordering(241) 00:14:25.517 fused_ordering(242) 00:14:25.517 fused_ordering(243) 00:14:25.517 fused_ordering(244) 00:14:25.517 fused_ordering(245) 00:14:25.517 fused_ordering(246) 00:14:25.517 fused_ordering(247) 00:14:25.517 fused_ordering(248) 00:14:25.517 fused_ordering(249) 00:14:25.517 fused_ordering(250) 00:14:25.517 fused_ordering(251) 00:14:25.517 fused_ordering(252) 00:14:25.517 fused_ordering(253) 00:14:25.517 fused_ordering(254) 00:14:25.517 fused_ordering(255) 00:14:25.517 fused_ordering(256) 00:14:25.517 fused_ordering(257) 00:14:25.517 fused_ordering(258) 00:14:25.517 fused_ordering(259) 00:14:25.517 fused_ordering(260) 00:14:25.517 fused_ordering(261) 00:14:25.517 fused_ordering(262) 00:14:25.517 fused_ordering(263) 00:14:25.517 fused_ordering(264) 00:14:25.517 fused_ordering(265) 00:14:25.517 fused_ordering(266) 00:14:25.517 fused_ordering(267) 00:14:25.517 fused_ordering(268) 00:14:25.517 fused_ordering(269) 00:14:25.517 fused_ordering(270) 00:14:25.517 fused_ordering(271) 00:14:25.517 fused_ordering(272) 00:14:25.517 fused_ordering(273) 00:14:25.517 fused_ordering(274) 00:14:25.517 fused_ordering(275) 00:14:25.517 fused_ordering(276) 00:14:25.517 fused_ordering(277) 00:14:25.517 fused_ordering(278) 00:14:25.517 fused_ordering(279) 00:14:25.517 fused_ordering(280) 00:14:25.517 fused_ordering(281) 00:14:25.517 fused_ordering(282) 00:14:25.517 fused_ordering(283) 00:14:25.517 fused_ordering(284) 00:14:25.517 fused_ordering(285) 00:14:25.517 fused_ordering(286) 00:14:25.517 fused_ordering(287) 00:14:25.517 fused_ordering(288) 00:14:25.518 fused_ordering(289) 00:14:25.518 fused_ordering(290) 00:14:25.518 fused_ordering(291) 00:14:25.518 fused_ordering(292) 00:14:25.518 fused_ordering(293) 00:14:25.518 fused_ordering(294) 00:14:25.518 fused_ordering(295) 00:14:25.518 fused_ordering(296) 00:14:25.518 fused_ordering(297) 00:14:25.518 fused_ordering(298) 00:14:25.518 fused_ordering(299) 00:14:25.518 fused_ordering(300) 00:14:25.518 fused_ordering(301) 00:14:25.518 fused_ordering(302) 00:14:25.518 fused_ordering(303) 00:14:25.518 fused_ordering(304) 00:14:25.518 fused_ordering(305) 00:14:25.518 fused_ordering(306) 00:14:25.518 fused_ordering(307) 00:14:25.518 fused_ordering(308) 00:14:25.518 fused_ordering(309) 00:14:25.518 fused_ordering(310) 00:14:25.518 fused_ordering(311) 00:14:25.518 fused_ordering(312) 00:14:25.518 fused_ordering(313) 00:14:25.518 fused_ordering(314) 00:14:25.518 fused_ordering(315) 00:14:25.518 fused_ordering(316) 00:14:25.518 fused_ordering(317) 00:14:25.518 fused_ordering(318) 00:14:25.518 fused_ordering(319) 00:14:25.518 fused_ordering(320) 00:14:25.518 fused_ordering(321) 00:14:25.518 fused_ordering(322) 
00:14:25.518 fused_ordering(323) 00:14:25.518 fused_ordering(324) 00:14:25.518 fused_ordering(325) 00:14:25.518 fused_ordering(326) 00:14:25.518 fused_ordering(327) 00:14:25.518 fused_ordering(328) 00:14:25.518 fused_ordering(329) 00:14:25.518 fused_ordering(330) 00:14:25.518 fused_ordering(331) 00:14:25.518 fused_ordering(332) 00:14:25.518 fused_ordering(333) 00:14:25.518 fused_ordering(334) 00:14:25.518 fused_ordering(335) 00:14:25.518 fused_ordering(336) 00:14:25.518 fused_ordering(337) 00:14:25.518 fused_ordering(338) 00:14:25.518 fused_ordering(339) 00:14:25.518 fused_ordering(340) 00:14:25.518 fused_ordering(341) 00:14:25.518 fused_ordering(342) 00:14:25.518 fused_ordering(343) 00:14:25.518 fused_ordering(344) 00:14:25.518 fused_ordering(345) 00:14:25.518 fused_ordering(346) 00:14:25.518 fused_ordering(347) 00:14:25.518 fused_ordering(348) 00:14:25.518 fused_ordering(349) 00:14:25.518 fused_ordering(350) 00:14:25.518 fused_ordering(351) 00:14:25.518 fused_ordering(352) 00:14:25.518 fused_ordering(353) 00:14:25.518 fused_ordering(354) 00:14:25.518 fused_ordering(355) 00:14:25.518 fused_ordering(356) 00:14:25.518 fused_ordering(357) 00:14:25.518 fused_ordering(358) 00:14:25.518 fused_ordering(359) 00:14:25.518 fused_ordering(360) 00:14:25.518 fused_ordering(361) 00:14:25.518 fused_ordering(362) 00:14:25.518 fused_ordering(363) 00:14:25.518 fused_ordering(364) 00:14:25.518 fused_ordering(365) 00:14:25.518 fused_ordering(366) 00:14:25.518 fused_ordering(367) 00:14:25.518 fused_ordering(368) 00:14:25.518 fused_ordering(369) 00:14:25.518 fused_ordering(370) 00:14:25.518 fused_ordering(371) 00:14:25.518 fused_ordering(372) 00:14:25.518 fused_ordering(373) 00:14:25.518 fused_ordering(374) 00:14:25.518 fused_ordering(375) 00:14:25.518 fused_ordering(376) 00:14:25.518 fused_ordering(377) 00:14:25.518 fused_ordering(378) 00:14:25.518 fused_ordering(379) 00:14:25.518 fused_ordering(380) 00:14:25.518 fused_ordering(381) 00:14:25.518 fused_ordering(382) 00:14:25.518 fused_ordering(383) 00:14:25.518 fused_ordering(384) 00:14:25.518 fused_ordering(385) 00:14:25.518 fused_ordering(386) 00:14:25.518 fused_ordering(387) 00:14:25.518 fused_ordering(388) 00:14:25.518 fused_ordering(389) 00:14:25.518 fused_ordering(390) 00:14:25.518 fused_ordering(391) 00:14:25.518 fused_ordering(392) 00:14:25.518 fused_ordering(393) 00:14:25.518 fused_ordering(394) 00:14:25.518 fused_ordering(395) 00:14:25.518 fused_ordering(396) 00:14:25.518 fused_ordering(397) 00:14:25.518 fused_ordering(398) 00:14:25.518 fused_ordering(399) 00:14:25.518 fused_ordering(400) 00:14:25.518 fused_ordering(401) 00:14:25.518 fused_ordering(402) 00:14:25.518 fused_ordering(403) 00:14:25.518 fused_ordering(404) 00:14:25.518 fused_ordering(405) 00:14:25.518 fused_ordering(406) 00:14:25.518 fused_ordering(407) 00:14:25.518 fused_ordering(408) 00:14:25.518 fused_ordering(409) 00:14:25.518 fused_ordering(410) 00:14:26.452 fused_ordering(411) 00:14:26.452 fused_ordering(412) 00:14:26.452 fused_ordering(413) 00:14:26.452 fused_ordering(414) 00:14:26.452 fused_ordering(415) 00:14:26.452 fused_ordering(416) 00:14:26.452 fused_ordering(417) 00:14:26.452 fused_ordering(418) 00:14:26.452 fused_ordering(419) 00:14:26.452 fused_ordering(420) 00:14:26.452 fused_ordering(421) 00:14:26.452 fused_ordering(422) 00:14:26.452 fused_ordering(423) 00:14:26.452 fused_ordering(424) 00:14:26.452 fused_ordering(425) 00:14:26.452 fused_ordering(426) 00:14:26.452 fused_ordering(427) 00:14:26.452 fused_ordering(428) 00:14:26.452 fused_ordering(429) 00:14:26.452 
fused_ordering(430) through fused_ordering(967), one log entry per ordinal, timestamps 00:14:26.452 - 00:14:28.320
00:14:28.320 fused_ordering(968) 00:14:28.320 fused_ordering(969) 00:14:28.320 fused_ordering(970) 00:14:28.320 fused_ordering(971) 00:14:28.320 fused_ordering(972) 00:14:28.320 fused_ordering(973) 00:14:28.320 fused_ordering(974) 00:14:28.320 fused_ordering(975) 00:14:28.320 fused_ordering(976) 00:14:28.320 fused_ordering(977) 00:14:28.320 fused_ordering(978) 00:14:28.320 fused_ordering(979) 00:14:28.320 fused_ordering(980) 00:14:28.320 fused_ordering(981) 00:14:28.320 fused_ordering(982) 00:14:28.320 fused_ordering(983) 00:14:28.320 fused_ordering(984) 00:14:28.320 fused_ordering(985) 00:14:28.320 fused_ordering(986) 00:14:28.320 fused_ordering(987) 00:14:28.320 fused_ordering(988) 00:14:28.320 fused_ordering(989) 00:14:28.320 fused_ordering(990) 00:14:28.320 fused_ordering(991) 00:14:28.320 fused_ordering(992) 00:14:28.320 fused_ordering(993) 00:14:28.320 fused_ordering(994) 00:14:28.320 fused_ordering(995) 00:14:28.320 fused_ordering(996) 00:14:28.320 fused_ordering(997) 00:14:28.320 fused_ordering(998) 00:14:28.320 fused_ordering(999) 00:14:28.320 fused_ordering(1000) 00:14:28.320 fused_ordering(1001) 00:14:28.320 fused_ordering(1002) 00:14:28.320 fused_ordering(1003) 00:14:28.320 fused_ordering(1004) 00:14:28.320 fused_ordering(1005) 00:14:28.320 fused_ordering(1006) 00:14:28.320 fused_ordering(1007) 00:14:28.320 fused_ordering(1008) 00:14:28.320 fused_ordering(1009) 00:14:28.320 fused_ordering(1010) 00:14:28.320 fused_ordering(1011) 00:14:28.320 fused_ordering(1012) 00:14:28.320 fused_ordering(1013) 00:14:28.320 fused_ordering(1014) 00:14:28.320 fused_ordering(1015) 00:14:28.320 fused_ordering(1016) 00:14:28.320 fused_ordering(1017) 00:14:28.320 fused_ordering(1018) 00:14:28.320 fused_ordering(1019) 00:14:28.320 fused_ordering(1020) 00:14:28.320 fused_ordering(1021) 00:14:28.320 fused_ordering(1022) 00:14:28.320 fused_ordering(1023) 00:14:28.320 02:55:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:28.320 02:55:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:28.320 02:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:28.320 02:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:14:28.320 02:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:28.320 02:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:14:28.320 02:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:28.320 02:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:28.320 rmmod nvme_tcp 00:14:28.320 rmmod nvme_fabrics 00:14:28.579 rmmod nvme_keyring 00:14:28.579 02:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:28.579 02:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:14:28.579 02:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:14:28.579 02:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 306255 ']' 00:14:28.579 02:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 306255 00:14:28.579 02:55:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # '[' -z 306255 ']' 00:14:28.579 02:55:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 306255 00:14:28.579 02:55:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:14:28.579 02:55:19 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:28.579 02:55:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 306255 00:14:28.579 02:55:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:28.579 02:55:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:28.579 02:55:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 306255' 00:14:28.579 killing process with pid 306255 00:14:28.579 02:55:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 306255 00:14:28.579 [2024-05-13 02:55:19.177621] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:28.579 02:55:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 306255 00:14:28.837 02:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:28.837 02:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:28.837 02:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:28.837 02:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:28.837 02:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:28.837 02:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.837 02:55:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:28.837 02:55:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.738 02:55:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:30.738 00:14:30.738 real 0m9.743s 00:14:30.738 user 0m7.572s 00:14:30.738 sys 0m5.112s 00:14:30.738 02:55:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:30.738 02:55:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:30.738 ************************************ 00:14:30.738 END TEST nvmf_fused_ordering 00:14:30.738 ************************************ 00:14:30.738 02:55:21 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:30.738 02:55:21 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:30.738 02:55:21 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:30.738 02:55:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:30.738 ************************************ 00:14:30.738 START TEST nvmf_delete_subsystem 00:14:30.738 ************************************ 00:14:30.739 02:55:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:30.998 * Looking for test storage... 
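Between the two test stages, nvmftestfini tears down the target used by the fused_ordering run: the nvme-tcp and nvme-fabrics modules are unloaded (the rmmod lines above), the nvmf_tgt process started for that test (pid 306255) is killed and reaped, the test namespace is removed and the initiator-side address is flushed. A rough consolidation of the traced commands follows as a sketch only; the explicit ip netns delete stands in for the _remove_spdk_ns helper and is an assumption, the rest is taken from the trace.

# Sketch of the per-test teardown traced above (helpers simplified, see note)
modprobe -v -r nvme-tcp            # triggers rmmod of nvme_tcp, nvme_fabrics, nvme_keyring
modprobe -v -r nvme-fabrics
kill 306255                        # killprocess: stop the nvmf_tgt from the previous test
wait 306255
ip netns delete cvl_0_0_ns_spdk    # assumed equivalent of the _remove_spdk_ns helper
ip -4 addr flush cvl_0_1           # drop the initiator-side test address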
00:14:30.998 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:30.998 02:55:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:30.998 02:55:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:14:30.998 02:55:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:30.998 02:55:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:30.998 02:55:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:30.998 02:55:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:30.998 02:55:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:30.998 02:55:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:30.998 02:55:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:30.998 02:55:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:30.998 02:55:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:30.998 02:55:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:30.998 02:55:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:30.998 02:55:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:30.998 02:55:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:30.998 02:55:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:30.998 02:55:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:30.998 02:55:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:30.998 02:55:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:30.998 02:55:21 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:30.998 02:55:21 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:30.998 02:55:21 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:30.998 02:55:21 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.998 02:55:21 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.998 02:55:21 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.998 02:55:21 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:14:30.998 02:55:21 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.998 02:55:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:14:30.998 02:55:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:30.998 02:55:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:30.998 02:55:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:30.998 02:55:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:30.998 02:55:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:30.998 02:55:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:30.998 02:55:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:30.998 02:55:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:30.998 02:55:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:30.998 02:55:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:30.998 02:55:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:30.998 02:55:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:30.998 02:55:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:30.998 02:55:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:30.998 02:55:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.998 02:55:21 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:30.998 02:55:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.998 02:55:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:30.998 02:55:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:30.998 02:55:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:14:30.998 02:55:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:32.900 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:32.900 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:32.900 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:32.900 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:32.900 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:32.900 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:14:32.900 00:14:32.900 --- 10.0.0.2 ping statistics --- 00:14:32.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.900 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:32.900 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:32.900 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:14:32.900 00:14:32.900 --- 10.0.0.1 ping statistics --- 00:14:32.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.900 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:14:32.900 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:32.901 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:32.901 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:32.901 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:32.901 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:32.901 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:32.901 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:32.901 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:32.901 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:32.901 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:32.901 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:32.901 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=308866 00:14:32.901 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:32.901 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 308866 00:14:32.901 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 308866 ']' 00:14:32.901 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:32.901 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:32.901 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:32.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
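The nvmf_tcp_init sequence traced above builds the loopback topology for the NVMe/TCP tests: the target-side port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, the initiator-side port (cvl_0_1) stays in the root namespace as 10.0.0.1, port 4420 is opened, and both directions are ping-verified. Condensed into a hand-runnable sketch, with interface names and addresses as reported in the log and the error handling of nvmf/common.sh omitted:

# Sketch of the netns-based NVMe/TCP test topology set up by nvmf_tcp_init
ip netns add cvl_0_0_ns_spdk                                         # namespace for the target port
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # move the target interface into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address (inside namespace)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                                   # initiator -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator reachability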
00:14:32.901 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:32.901 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:33.158 [2024-05-13 02:55:23.702559] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:14:33.158 [2024-05-13 02:55:23.702634] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:33.158 EAL: No free 2048 kB hugepages reported on node 1 00:14:33.158 [2024-05-13 02:55:23.741944] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:33.158 [2024-05-13 02:55:23.767562] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:33.158 [2024-05-13 02:55:23.856339] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:33.158 [2024-05-13 02:55:23.856394] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:33.158 [2024-05-13 02:55:23.856407] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:33.158 [2024-05-13 02:55:23.856418] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:33.158 [2024-05-13 02:55:23.856428] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:33.158 [2024-05-13 02:55:23.856480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:33.158 [2024-05-13 02:55:23.856484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.416 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:33.416 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:14:33.416 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:33.416 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:33.416 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:33.416 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:33.416 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:33.416 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.416 02:55:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:33.416 [2024-05-13 02:55:24.002453] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:33.416 02:55:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.416 02:55:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:33.416 02:55:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.416 02:55:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:33.416 02:55:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.416 02:55:24 
nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:33.416 02:55:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.416 02:55:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:33.416 [2024-05-13 02:55:24.018418] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:33.416 [2024-05-13 02:55:24.018708] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:33.416 02:55:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.416 02:55:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:33.416 02:55:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.416 02:55:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:33.416 NULL1 00:14:33.416 02:55:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.416 02:55:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:33.416 02:55:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.416 02:55:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:33.416 Delay0 00:14:33.416 02:55:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.416 02:55:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:33.416 02:55:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.416 02:55:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:33.416 02:55:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.416 02:55:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=308886 00:14:33.416 02:55:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:33.416 02:55:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:33.416 EAL: No free 2048 kB hugepages reported on node 1 00:14:33.416 [2024-05-13 02:55:24.093483] subsystem.c:1520:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
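In these traces rpc_cmd forwards its arguments to SPDK's scripts/rpc.py against the nvmf_tgt started inside the namespace. Collected from the traced commands, the provisioning and the load generator would look roughly as follows when issued by hand; the relative paths and the default RPC socket are assumptions, and the inline comments reflect the usual meaning of the parameters rather than anything stated in the log. The 1,000,000 us delay bdev in front of the null bdev presumably keeps the perf I/O outstanding long enough for the later nvmf_delete_subsystem call to race with active commands.

# Sketch of the traced provisioning, issued directly via scripts/rpc.py (default RPC socket assumed)
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py bdev_null_create NULL1 1000 512                   # backing null bdev: 1000 MB, 512 B blocks
./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# Background load from the log, pointed at the listener created above
./build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!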
00:14:35.313 02:55:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:14:35.313 02:55:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:35.313 02:55:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:14:35.571 - 00:14:36.945 Read/Write completed with error (sct=0, sc=8) and starting I/O failed: -6, repeated for the I/O in flight while the subsystem is deleted, interleaved with:
[2024-05-13 02:55:26.346202] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f59a000c600 is same with the state(5) to be set
[2024-05-13 02:55:27.315445] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e85b0 is same with the state(5) to be set
[2024-05-13 02:55:27.348734] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
[2024-05-13 02:55:27.348880] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f59a000c2f0 is same with the state(5) to be set
[2024-05-13 02:55:27.350071] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ca0 is same with the state(5) to be set
[2024-05-13 02:55:27.350297] nvme_tcp.c:
322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e8bd0 is same with the state(5) to be set 00:14:36.945 Initializing NVMe Controllers 00:14:36.945 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:36.945 Controller IO queue size 128, less than required. 00:14:36.945 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:36.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:36.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:36.945 Initialization complete. Launching workers. 00:14:36.945 ======================================================== 00:14:36.945 Latency(us) 00:14:36.945 Device Information : IOPS MiB/s Average min max 00:14:36.945 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 187.72 0.09 957726.17 572.99 1012506.54 00:14:36.945 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 147.49 0.07 909100.80 353.16 1013851.05 00:14:36.945 ======================================================== 00:14:36.945 Total : 335.21 0.16 936331.01 353.16 1013851.05 00:14:36.945 00:14:36.945 [2024-05-13 02:55:27.351119] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e85b0 (9): Bad file descriptor 00:14:36.945 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:14:36.945 02:55:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.945 02:55:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:14:36.945 02:55:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 308886 00:14:36.945 02:55:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:37.203 02:55:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:37.203 02:55:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 308886 00:14:37.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (308886) - No such process 00:14:37.203 02:55:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 308886 00:14:37.203 02:55:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:14:37.203 02:55:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 308886 00:14:37.203 02:55:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:14:37.203 02:55:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:37.203 02:55:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:14:37.203 02:55:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:37.203 02:55:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 308886 00:14:37.203 02:55:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:14:37.203 02:55:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:37.203 02:55:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:37.203 02:55:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- 
# (( !es == 0 )) 00:14:37.203 02:55:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:37.204 02:55:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.204 02:55:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:37.204 02:55:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.204 02:55:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:37.204 02:55:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.204 02:55:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:37.204 [2024-05-13 02:55:27.875935] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:37.204 02:55:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.204 02:55:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:37.204 02:55:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.204 02:55:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:37.204 02:55:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.204 02:55:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=309409 00:14:37.204 02:55:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:14:37.204 02:55:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:37.204 02:55:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 309409 00:14:37.204 02:55:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:37.204 EAL: No free 2048 kB hugepages reported on node 1 00:14:37.204 [2024-05-13 02:55:27.935266] subsystem.c:1520:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
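The trace above shows the core pattern exercised by delete_subsystem.sh at this point: bring the subsystem up, start spdk_nvme_perf against it in the background, and then poll the perf PID with kill -0 until the process goes away while the target is torn down underneath it. A condensed sketch of that pattern, reusing the RPCs and perf arguments that appear in the trace (the real script routes these through rpc_cmd/NOT helpers and its own delay guard, so treat this as an illustration rather than the script itself):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf

# Subsystem with the Delay0 namespace, listening on the target-side address.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# 3-second 70/30 randrw run at queue depth 128 with 512-byte I/Os, in the background.
$perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

# Poll until perf exits; kill -0 only tests that the PID still exists.
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    (( delay++ > 20 )) && break   # cap mirrors the (( delay++ > 20 )) guard seen in the trace
    sleep 0.5
done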
00:14:37.769 02:55:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:37.769 02:55:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 309409 00:14:37.769 02:55:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:38.401 02:55:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:38.401 02:55:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 309409 00:14:38.401 02:55:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:38.659 02:55:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:38.659 02:55:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 309409 00:14:38.659 02:55:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:39.225 02:55:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:39.225 02:55:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 309409 00:14:39.225 02:55:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:39.790 02:55:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:39.790 02:55:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 309409 00:14:39.790 02:55:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:40.355 02:55:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:40.355 02:55:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 309409 00:14:40.355 02:55:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:40.355 Initializing NVMe Controllers 00:14:40.355 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:40.355 Controller IO queue size 128, less than required. 00:14:40.355 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:40.355 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:40.355 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:40.355 Initialization complete. Launching workers. 
00:14:40.355 ======================================================== 00:14:40.355 Latency(us) 00:14:40.355 Device Information : IOPS MiB/s Average min max 00:14:40.355 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003842.48 1000285.24 1011077.10 00:14:40.355 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005013.95 1000549.35 1012886.21 00:14:40.355 ======================================================== 00:14:40.355 Total : 256.00 0.12 1004428.22 1000285.24 1012886.21 00:14:40.355 00:14:40.613 02:55:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:40.613 02:55:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 309409 00:14:40.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (309409) - No such process 00:14:40.613 02:55:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 309409 00:14:40.613 02:55:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:40.613 02:55:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:40.613 02:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:40.613 02:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:14:40.613 02:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:40.613 02:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:14:40.613 02:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:40.613 02:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:40.613 rmmod nvme_tcp 00:14:40.872 rmmod nvme_fabrics 00:14:40.872 rmmod nvme_keyring 00:14:40.872 02:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:40.872 02:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:14:40.872 02:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:14:40.872 02:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 308866 ']' 00:14:40.872 02:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 308866 00:14:40.872 02:55:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 308866 ']' 00:14:40.872 02:55:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 308866 00:14:40.872 02:55:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname 00:14:40.872 02:55:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:40.872 02:55:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 308866 00:14:40.872 02:55:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:40.872 02:55:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:40.872 02:55:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 308866' 00:14:40.872 killing process with pid 308866 00:14:40.872 02:55:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 308866 00:14:40.872 [2024-05-13 02:55:31.504827] app.c:1024:log_deprecation_hits: *WARNING*: 
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:40.872 02:55:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait 308866 00:14:41.131 02:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:41.131 02:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:41.131 02:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:41.131 02:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:41.131 02:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:41.131 02:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.131 02:55:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:41.131 02:55:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:43.038 02:55:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:43.038 00:14:43.038 real 0m12.257s 00:14:43.038 user 0m27.909s 00:14:43.038 sys 0m3.013s 00:14:43.038 02:55:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:43.038 02:55:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:43.038 ************************************ 00:14:43.038 END TEST nvmf_delete_subsystem 00:14:43.038 ************************************ 00:14:43.038 02:55:33 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:43.038 02:55:33 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:43.038 02:55:33 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:43.038 02:55:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:43.038 ************************************ 00:14:43.038 START TEST nvmf_ns_masking 00:14:43.038 ************************************ 00:14:43.038 02:55:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:43.297 * Looking for test storage... 
00:14:43.297 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:43.297 02:55:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:43.297 02:55:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:43.297 02:55:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:43.297 02:55:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:43.297 02:55:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:43.297 02:55:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:43.297 02:55:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:43.297 02:55:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:43.297 02:55:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:43.297 02:55:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:43.297 02:55:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:43.297 02:55:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:43.297 02:55:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:43.297 02:55:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:43.297 02:55:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:43.297 02:55:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:43.297 02:55:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:43.297 02:55:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:43.297 02:55:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:43.297 02:55:33 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:43.297 02:55:33 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:43.297 02:55:33 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:43.297 02:55:33 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.297 02:55:33 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.298 02:55:33 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.298 02:55:33 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:43.298 02:55:33 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.298 02:55:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:14:43.298 02:55:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:43.298 02:55:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:43.298 02:55:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:43.298 02:55:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:43.298 02:55:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:43.298 02:55:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:43.298 02:55:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:43.298 02:55:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:43.298 02:55:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:43.298 02:55:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:14:43.298 02:55:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:43.298 02:55:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:14:43.298 02:55:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:14:43.298 02:55:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=f62da4e7-164f-4861-a6e1-15e89d142c80 00:14:43.298 02:55:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:14:43.298 02:55:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:43.298 02:55:33 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:43.298 02:55:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:43.298 02:55:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:43.298 02:55:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:43.298 02:55:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:43.298 02:55:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:43.298 02:55:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:43.298 02:55:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:43.298 02:55:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:43.298 02:55:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:14:43.298 02:55:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:45.199 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:45.199 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:45.199 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
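In outline, the device-discovery trace here maps each supported NIC's PCI function to its kernel net device by globbing sysfs, which is how the harness arrives at the cvl_0_0/cvl_0_1 names. A minimal sketch of that lookup for one of the addresses reported above (a standalone illustration; the real nvmf/common.sh loop also filters on driver type and link state):

# Resolve the net device(s) behind a PCI function the way the harness does:
# glob the device's sysfs net/ directory and keep only the basenames.
pci=0000:0a:00.0
pci_net_devs=(/sys/bus/pci/devices/"$pci"/net/*)
pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path, e.g. cvl_0_0
for net_dev in "${pci_net_devs[@]}"; do
    echo "Found net devices under $pci: $net_dev"
done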
00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:45.199 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:45.199 02:55:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:45.458 02:55:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:45.458 02:55:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:45.458 02:55:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:45.458 02:55:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:45.458 02:55:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:45.458 02:55:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:45.458 02:55:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:14:45.458 02:55:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:45.458 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:45.458 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:14:45.458 00:14:45.458 --- 10.0.0.2 ping statistics --- 00:14:45.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.458 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:14:45.458 02:55:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:45.458 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:45.458 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.331 ms 00:14:45.458 00:14:45.458 --- 10.0.0.1 ping statistics --- 00:14:45.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.458 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:14:45.458 02:55:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:45.458 02:55:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:14:45.458 02:55:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:45.458 02:55:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:45.458 02:55:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:45.458 02:55:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:45.458 02:55:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:45.458 02:55:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:45.458 02:55:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:45.458 02:55:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:14:45.458 02:55:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:45.458 02:55:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:45.458 02:55:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:45.458 02:55:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=311759 00:14:45.458 02:55:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:45.458 02:55:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 311759 00:14:45.458 02:55:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 311759 ']' 00:14:45.458 02:55:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.458 02:55:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:45.458 02:55:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:45.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:45.458 02:55:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:45.458 02:55:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:45.458 [2024-05-13 02:55:36.193614] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 
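The ip/iptables commands and ping checks traced above amount to the usual two-port loopback for a phy run: the target-side port is moved into its own network namespace so nvmf_tgt (launched inside that namespace) and the initiator can talk over the physical NICs of a single host. A condensed sketch using the interface names and addresses from the trace (error handling and cleanup omitted):

# Target-side port goes into its own namespace; the initiator side stays in the default one.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Verify reachability in both directions before starting the target in the namespace.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1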
00:14:45.458 [2024-05-13 02:55:36.193716] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:45.458 EAL: No free 2048 kB hugepages reported on node 1 00:14:45.458 [2024-05-13 02:55:36.234735] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:45.717 [2024-05-13 02:55:36.266383] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:45.717 [2024-05-13 02:55:36.361795] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:45.717 [2024-05-13 02:55:36.361850] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:45.717 [2024-05-13 02:55:36.361864] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:45.717 [2024-05-13 02:55:36.361882] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:45.717 [2024-05-13 02:55:36.361892] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:45.717 [2024-05-13 02:55:36.361949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:45.717 [2024-05-13 02:55:36.361995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:45.717 [2024-05-13 02:55:36.362121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:45.717 [2024-05-13 02:55:36.362124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.717 02:55:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:45.717 02:55:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:14:45.717 02:55:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:45.717 02:55:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:45.717 02:55:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:45.717 02:55:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:45.717 02:55:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:46.282 [2024-05-13 02:55:36.783423] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:46.282 02:55:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:14:46.282 02:55:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:14:46.282 02:55:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:46.540 Malloc1 00:14:46.540 02:55:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:46.798 Malloc2 00:14:46.798 02:55:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:47.055 02:55:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:47.312 02:55:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:47.312 [2024-05-13 02:55:38.104851] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:47.312 [2024-05-13 02:55:38.105156] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:47.570 02:55:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:14:47.570 02:55:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f62da4e7-164f-4861-a6e1-15e89d142c80 -a 10.0.0.2 -s 4420 -i 4 00:14:47.570 02:55:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:14:47.570 02:55:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:47.570 02:55:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:47.570 02:55:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:47.570 02:55:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:14:50.107 02:55:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:50.107 02:55:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:50.107 02:55:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:50.107 02:55:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:50.107 02:55:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:50.107 02:55:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:50.107 02:55:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:50.107 02:55:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:50.107 02:55:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:50.107 02:55:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:50.107 02:55:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:14:50.107 02:55:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:50.107 02:55:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:50.107 [ 0]:0x1 00:14:50.107 02:55:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:50.107 02:55:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:50.107 02:55:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=90a206b4b11d453fbd0e7be10718cf4b 00:14:50.107 02:55:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 90a206b4b11d453fbd0e7be10718cf4b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:50.107 02:55:40 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:50.107 02:55:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:14:50.107 02:55:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:50.107 02:55:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:50.107 [ 0]:0x1 00:14:50.107 02:55:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:50.107 02:55:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:50.107 02:55:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=90a206b4b11d453fbd0e7be10718cf4b 00:14:50.107 02:55:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 90a206b4b11d453fbd0e7be10718cf4b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:50.107 02:55:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:14:50.107 02:55:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:50.107 02:55:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:50.107 [ 1]:0x2 00:14:50.107 02:55:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:50.107 02:55:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:50.107 02:55:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=c0b04856e9f54134a623a2cfdb457a6e 00:14:50.107 02:55:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ c0b04856e9f54134a623a2cfdb457a6e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:50.107 02:55:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:14:50.107 02:55:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:50.364 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:50.364 02:55:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:50.621 02:55:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:50.878 02:55:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:14:50.878 02:55:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f62da4e7-164f-4861-a6e1-15e89d142c80 -a 10.0.0.2 -s 4420 -i 4 00:14:50.878 02:55:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:50.878 02:55:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:50.878 02:55:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:50.878 02:55:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:14:50.878 02:55:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:14:50.878 02:55:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:14:53.403 02:55:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ 
<= 15 )) 00:14:53.403 02:55:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:53.403 02:55:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:53.403 02:55:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:53.403 02:55:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:53.404 02:55:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:53.404 02:55:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:53.404 02:55:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:53.404 02:55:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:53.404 02:55:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:53.404 02:55:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:14:53.404 02:55:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:53.404 02:55:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:53.404 02:55:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:53.404 02:55:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:53.404 02:55:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:53.404 02:55:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:53.404 02:55:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:53.404 02:55:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:53.404 02:55:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:53.404 02:55:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:53.404 02:55:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:53.404 02:55:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:53.404 02:55:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:53.404 02:55:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:53.404 02:55:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:53.404 02:55:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:53.404 02:55:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:53.404 02:55:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:14:53.404 02:55:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:53.404 02:55:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:53.404 [ 0]:0x2 00:14:53.404 02:55:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:53.404 02:55:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:53.404 02:55:43 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@40 -- # nguid=c0b04856e9f54134a623a2cfdb457a6e 00:14:53.404 02:55:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ c0b04856e9f54134a623a2cfdb457a6e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:53.404 02:55:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:53.404 02:55:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:14:53.404 02:55:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:53.404 02:55:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:53.404 [ 0]:0x1 00:14:53.404 02:55:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:53.404 02:55:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:53.404 02:55:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=90a206b4b11d453fbd0e7be10718cf4b 00:14:53.404 02:55:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 90a206b4b11d453fbd0e7be10718cf4b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:53.404 02:55:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:14:53.404 02:55:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:53.404 02:55:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:53.404 [ 1]:0x2 00:14:53.404 02:55:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:53.404 02:55:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:53.404 02:55:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=c0b04856e9f54134a623a2cfdb457a6e 00:14:53.404 02:55:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ c0b04856e9f54134a623a2cfdb457a6e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:53.404 02:55:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:53.661 02:55:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:14:53.661 02:55:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:53.661 02:55:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:53.661 02:55:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:53.661 02:55:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:53.661 02:55:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:53.661 02:55:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:53.661 02:55:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:53.661 02:55:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:53.661 02:55:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:53.661 02:55:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:53.661 02:55:44 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@40 -- # jq -r .nguid 00:14:53.919 02:55:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:53.919 02:55:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:53.919 02:55:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:53.919 02:55:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:53.919 02:55:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:53.919 02:55:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:53.919 02:55:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:14:53.919 02:55:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:53.919 02:55:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:53.919 [ 0]:0x2 00:14:53.919 02:55:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:53.919 02:55:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:53.919 02:55:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=c0b04856e9f54134a623a2cfdb457a6e 00:14:53.919 02:55:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ c0b04856e9f54134a623a2cfdb457a6e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:53.919 02:55:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:14:53.919 02:55:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:53.919 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.919 02:55:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:54.193 02:55:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:14:54.193 02:55:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f62da4e7-164f-4861-a6e1-15e89d142c80 -a 10.0.0.2 -s 4420 -i 4 00:14:54.193 02:55:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:54.193 02:55:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:54.193 02:55:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:54.193 02:55:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:14:54.193 02:55:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:14:54.193 02:55:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:14:56.125 02:55:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:56.383 02:55:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:56.383 02:55:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:56.383 02:55:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:14:56.383 02:55:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == 
nvme_device_counter )) 00:14:56.383 02:55:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:56.383 02:55:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:56.383 02:55:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:56.383 02:55:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:56.383 02:55:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:56.383 02:55:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:14:56.383 02:55:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:56.383 02:55:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:56.383 [ 0]:0x1 00:14:56.383 02:55:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:56.383 02:55:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:56.383 02:55:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=90a206b4b11d453fbd0e7be10718cf4b 00:14:56.383 02:55:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 90a206b4b11d453fbd0e7be10718cf4b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:56.383 02:55:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:14:56.383 02:55:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:56.383 02:55:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:56.383 [ 1]:0x2 00:14:56.383 02:55:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:56.383 02:55:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:56.383 02:55:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=c0b04856e9f54134a623a2cfdb457a6e 00:14:56.383 02:55:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ c0b04856e9f54134a623a2cfdb457a6e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:56.383 02:55:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:56.949 02:55:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:14:56.949 02:55:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:56.949 02:55:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:56.949 02:55:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:56.949 02:55:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:56.949 02:55:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:56.949 02:55:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:56.949 02:55:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:56.949 02:55:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:56.949 02:55:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:56.949 02:55:47 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:56.949 02:55:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:56.949 02:55:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:56.949 02:55:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:56.949 02:55:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:56.949 02:55:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:56.949 02:55:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:56.949 02:55:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:56.949 02:55:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:14:56.949 02:55:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:56.949 02:55:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:56.949 [ 0]:0x2 00:14:56.949 02:55:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:56.949 02:55:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:56.949 02:55:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=c0b04856e9f54134a623a2cfdb457a6e 00:14:56.949 02:55:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ c0b04856e9f54134a623a2cfdb457a6e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:56.949 02:55:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:56.949 02:55:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:56.949 02:55:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:56.949 02:55:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:56.949 02:55:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:56.949 02:55:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:56.949 02:55:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:56.949 02:55:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:56.949 02:55:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:56.949 02:55:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:56.949 02:55:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:56.949 02:55:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host 
nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:57.207 [2024-05-13 02:55:47.780226] nvmf_rpc.c:1776:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:57.207 request: 00:14:57.207 { 00:14:57.207 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:57.207 "nsid": 2, 00:14:57.207 "host": "nqn.2016-06.io.spdk:host1", 00:14:57.207 "method": "nvmf_ns_remove_host", 00:14:57.207 "req_id": 1 00:14:57.207 } 00:14:57.207 Got JSON-RPC error response 00:14:57.207 response: 00:14:57.207 { 00:14:57.207 "code": -32602, 00:14:57.207 "message": "Invalid parameters" 00:14:57.207 } 00:14:57.207 02:55:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:57.207 02:55:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:57.207 02:55:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:57.207 02:55:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:57.207 02:55:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:14:57.207 02:55:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:57.207 02:55:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:57.207 02:55:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:57.207 02:55:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:57.207 02:55:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:57.207 02:55:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:57.207 02:55:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:57.207 02:55:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:57.207 02:55:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:57.207 02:55:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:57.207 02:55:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:57.207 02:55:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:57.207 02:55:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:57.207 02:55:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:57.207 02:55:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:57.207 02:55:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:57.207 02:55:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:57.207 02:55:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:14:57.207 02:55:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:57.207 02:55:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:57.207 [ 0]:0x2 00:14:57.207 02:55:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:57.207 02:55:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:57.207 02:55:47 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@40 -- # nguid=c0b04856e9f54134a623a2cfdb457a6e 00:14:57.207 02:55:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ c0b04856e9f54134a623a2cfdb457a6e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:57.207 02:55:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:14:57.207 02:55:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:57.207 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:57.207 02:55:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:57.465 02:55:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:14:57.465 02:55:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:14:57.465 02:55:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:57.465 02:55:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:14:57.465 02:55:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:57.465 02:55:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:14:57.465 02:55:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:57.465 02:55:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:57.465 rmmod nvme_tcp 00:14:57.465 rmmod nvme_fabrics 00:14:57.466 rmmod nvme_keyring 00:14:57.466 02:55:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:57.466 02:55:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:14:57.466 02:55:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:14:57.466 02:55:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 311759 ']' 00:14:57.466 02:55:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 311759 00:14:57.466 02:55:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 311759 ']' 00:14:57.466 02:55:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 311759 00:14:57.466 02:55:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:14:57.466 02:55:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:57.466 02:55:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 311759 00:14:57.466 02:55:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:57.466 02:55:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:57.466 02:55:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 311759' 00:14:57.466 killing process with pid 311759 00:14:57.466 02:55:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 311759 00:14:57.466 [2024-05-13 02:55:48.226953] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:57.466 02:55:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@970 -- # wait 311759 00:14:57.724 02:55:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:57.724 02:55:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:57.724 02:55:48 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:57.724 02:55:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:57.724 02:55:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:57.724 02:55:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:57.724 02:55:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:57.724 02:55:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:00.260 02:55:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:00.260 00:15:00.260 real 0m16.741s 00:15:00.260 user 0m51.852s 00:15:00.260 sys 0m3.818s 00:15:00.260 02:55:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:00.260 02:55:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:00.260 ************************************ 00:15:00.260 END TEST nvmf_ns_masking 00:15:00.260 ************************************ 00:15:00.260 02:55:50 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:15:00.260 02:55:50 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:00.260 02:55:50 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:00.260 02:55:50 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:00.260 02:55:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:00.260 ************************************ 00:15:00.260 START TEST nvmf_nvme_cli 00:15:00.260 ************************************ 00:15:00.260 02:55:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:00.260 * Looking for test storage... 
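[annotation] The nvmf_ns_masking run above drives namespace visibility purely through the nvmf_ns_add_host / nvmf_ns_remove_host RPCs and verifies the result from the initiator by reading the namespace NGUID; the test treats an all-zero NGUID as "not visible" and wraps that case in its NOT helper (expected failure). A condensed by-hand sketch of the same check, built only from commands that appear in the trace above — the /dev/nvme0 device name, the cnode1/host1 NQNs and the NGUID values are taken from this particular run and will differ elsewhere; RPC below is just shorthand for the full rpc.py path used in the trace:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # hide namespace 1 from host1, then confirm the initiator no longer sees it
  $RPC nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  nvme list-ns /dev/nvme0 | grep 0x1                     # no match once the namespace is masked
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid    # 00000000000000000000000000000000 while masked
  # re-expose it and the real NGUID comes back
  $RPC nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid    # 90a206b4b11d453fbd0e7be10718cf4b in this run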
00:15:00.260 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:00.260 02:55:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:00.260 02:55:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:15:00.260 02:55:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:00.260 02:55:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:00.260 02:55:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:00.260 02:55:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:00.260 02:55:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:00.260 02:55:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:00.260 02:55:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:00.260 02:55:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:00.260 02:55:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:00.260 02:55:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:00.260 02:55:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:00.260 02:55:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:00.260 02:55:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:00.260 02:55:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:00.260 02:55:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:00.260 02:55:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:00.260 02:55:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:00.260 02:55:50 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:00.260 02:55:50 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:00.260 02:55:50 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:00.260 02:55:50 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.260 02:55:50 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.260 02:55:50 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.260 02:55:50 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:00.260 02:55:50 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.260 02:55:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:15:00.260 02:55:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:00.260 02:55:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:00.260 02:55:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:00.260 02:55:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:00.260 02:55:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:00.260 02:55:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:00.260 02:55:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:00.260 02:55:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:00.260 02:55:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:00.260 02:55:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:00.260 02:55:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:00.260 02:55:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:00.260 02:55:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:00.260 02:55:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:00.260 02:55:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:00.260 02:55:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:00.260 02:55:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:00.260 02:55:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:15:00.261 02:55:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:00.261 02:55:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:00.261 02:55:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:00.261 02:55:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:00.261 02:55:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:15:00.261 02:55:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:02.163 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:02.163 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:15:02.163 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:02.163 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:02.163 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:02.163 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:02.163 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:02.163 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:15:02.163 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:02.163 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:15:02.163 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:15:02.163 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:15:02.163 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:15:02.163 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:15:02.163 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:15:02.163 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:02.163 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:02.163 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:02.163 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:02.163 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:02.163 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:02.163 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:02.163 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:02.163 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:02.163 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:02.163 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:02.163 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:02.163 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:02.163 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:02.163 02:55:52 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:02.164 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:02.164 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:02.164 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:02.164 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:02.164 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:02.164 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:15:02.164 00:15:02.164 --- 10.0.0.2 ping statistics --- 00:15:02.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.164 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:02.164 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:02.164 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:15:02.164 00:15:02.164 --- 10.0.0.1 ping statistics --- 00:15:02.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.164 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=315304 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 315304 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@827 -- # '[' -z 315304 ']' 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:02.164 02:55:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:02.164 [2024-05-13 02:55:52.846121] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:15:02.164 [2024-05-13 02:55:52.846219] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:02.164 EAL: No free 2048 kB hugepages reported on node 1 00:15:02.164 [2024-05-13 02:55:52.884388] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:02.164 [2024-05-13 02:55:52.916189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:02.423 [2024-05-13 02:55:53.007588] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
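[annotation] Before the nvme_cli test proper, the prologue above builds this rig's standard two-port TCP topology: the first E810 port (cvl_0_0) is moved into a private network namespace and addressed as the target at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and nvmf_tgt is then launched inside that namespace. A condensed sketch assembled from the commands traced above — interface names, addresses and paths are specific to this host:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open NVMe/TCP port 4420 on the initiator-side interface
  ping -c 1 10.0.0.2                                                   # reachability check before starting the target
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # the harness backgrounds nvmf_tgt and waits for its RPC socket (waitforlisten) before issuing any rpc.py calls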
00:15:02.423 [2024-05-13 02:55:53.007649] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:02.423 [2024-05-13 02:55:53.007673] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:02.423 [2024-05-13 02:55:53.007687] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:02.423 [2024-05-13 02:55:53.007708] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:02.423 [2024-05-13 02:55:53.007779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:02.423 [2024-05-13 02:55:53.007843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:02.423 [2024-05-13 02:55:53.007959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:02.423 [2024-05-13 02:55:53.007961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.423 02:55:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:02.423 02:55:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # return 0 00:15:02.423 02:55:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:02.423 02:55:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:02.423 02:55:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:02.423 02:55:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:02.423 02:55:53 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:02.423 02:55:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.423 02:55:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:02.423 [2024-05-13 02:55:53.165534] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:02.423 02:55:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.423 02:55:53 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:02.423 02:55:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.423 02:55:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:02.423 Malloc0 00:15:02.423 02:55:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.423 02:55:53 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:02.423 02:55:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.423 02:55:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:02.423 Malloc1 00:15:02.423 02:55:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.423 02:55:53 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:02.423 02:55:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.423 02:55:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:02.681 02:55:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.681 02:55:53 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:02.681 02:55:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.681 02:55:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:02.681 02:55:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.681 02:55:53 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:02.681 02:55:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.681 02:55:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:02.681 02:55:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.681 02:55:53 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:02.681 02:55:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.681 02:55:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:02.681 [2024-05-13 02:55:53.251930] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:02.681 [2024-05-13 02:55:53.252254] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:02.681 02:55:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.681 02:55:53 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:02.681 02:55:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.681 02:55:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:02.681 02:55:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.681 02:55:53 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:15:02.681 00:15:02.681 Discovery Log Number of Records 2, Generation counter 2 00:15:02.681 =====Discovery Log Entry 0====== 00:15:02.681 trtype: tcp 00:15:02.681 adrfam: ipv4 00:15:02.681 subtype: current discovery subsystem 00:15:02.681 treq: not required 00:15:02.681 portid: 0 00:15:02.681 trsvcid: 4420 00:15:02.681 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:02.681 traddr: 10.0.0.2 00:15:02.681 eflags: explicit discovery connections, duplicate discovery information 00:15:02.681 sectype: none 00:15:02.681 =====Discovery Log Entry 1====== 00:15:02.681 trtype: tcp 00:15:02.681 adrfam: ipv4 00:15:02.681 subtype: nvme subsystem 00:15:02.681 treq: not required 00:15:02.681 portid: 0 00:15:02.681 trsvcid: 4420 00:15:02.681 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:02.681 traddr: 10.0.0.2 00:15:02.681 eflags: none 00:15:02.681 sectype: none 00:15:02.681 02:55:53 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:02.681 02:55:53 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:02.681 02:55:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:02.681 02:55:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:02.681 02:55:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:02.681 02:55:53 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:02.681 02:55:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:02.681 02:55:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:02.681 02:55:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:02.681 02:55:53 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:02.681 02:55:53 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:03.246 02:55:54 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:03.246 02:55:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1194 -- # local i=0 00:15:03.246 02:55:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:03.246 02:55:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:15:03.246 02:55:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:15:03.246 02:55:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # sleep 2 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # return 0 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:15:05.773 /dev/nvme0n1 ]] 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli 
-- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:05.773 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1215 -- # local i=0 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # return 0 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:05.773 rmmod nvme_tcp 
00:15:05.773 rmmod nvme_fabrics 00:15:05.773 rmmod nvme_keyring 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 315304 ']' 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 315304 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@946 -- # '[' -z 315304 ']' 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # kill -0 315304 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # uname 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 315304 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # echo 'killing process with pid 315304' 00:15:05.773 killing process with pid 315304 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # kill 315304 00:15:05.773 [2024-05-13 02:55:56.277921] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # wait 315304 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:05.773 02:55:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:08.307 02:55:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:08.307 00:15:08.307 real 0m7.970s 00:15:08.307 user 0m14.506s 00:15:08.307 sys 0m2.077s 00:15:08.307 02:55:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:08.307 02:55:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:08.307 ************************************ 00:15:08.307 END TEST nvmf_nvme_cli 00:15:08.307 ************************************ 00:15:08.307 02:55:58 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:15:08.307 02:55:58 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:08.307 02:55:58 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:08.307 02:55:58 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 
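[annotation] For reference, the target-side configuration that the nvmf_nvme_cli test above applied before connecting, reduced to the RPC and nvme-cli calls that appear in the trace. RPC is again shorthand for the full rpc.py path, and the host NQN/ID are the values nvmf/common.sh derives from nvme gen-hostnqn for this run:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC bdev_malloc_create 64 512 -b Malloc1
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # initiator side: discovery reports two log entries, the connect exposes two namespaces
  nvme discover -t tcp -a 10.0.0.2 -s 4420 --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
  nvme list                                              # /dev/nvme0n1 and /dev/nvme0n2 back Malloc0 and Malloc1
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1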
00:15:08.307 02:55:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:08.307 ************************************ 00:15:08.307 START TEST nvmf_vfio_user 00:15:08.307 ************************************ 00:15:08.307 02:55:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:08.307 * Looking for test storage... 00:15:08.307 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:08.307 02:55:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:08.307 02:55:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:15:08.307 02:55:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:08.307 02:55:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:08.307 02:55:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:08.307 02:55:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:08.307 02:55:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:08.307 02:55:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:08.307 02:55:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:08.307 02:55:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:08.307 02:55:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:08.307 02:55:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:08.308 02:55:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:08.308 02:55:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:08.308 02:55:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:08.308 02:55:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:08.308 02:55:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:08.308 02:55:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:08.308 02:55:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:08.308 02:55:58 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:08.308 02:55:58 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:08.308 02:55:58 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:08.308 02:55:58 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.308 02:55:58 nvmf_tcp.nvmf_vfio_user 
-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.308 02:55:58 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.308 02:55:58 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:08.308 02:55:58 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.308 02:55:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:15:08.308 02:55:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:08.308 02:55:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:08.308 02:55:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:08.308 02:55:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:08.308 02:55:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:08.308 02:55:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:08.308 02:55:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:08.308 02:55:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:08.308 02:55:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:08.308 02:55:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:08.308 02:55:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:08.308 02:55:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:08.308 02:55:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:08.308 02:55:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:08.308 02:55:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:08.308 02:55:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # 
setup_nvmf_vfio_user '' '' 00:15:08.308 02:55:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:08.308 02:55:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:08.308 02:55:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=316106 00:15:08.308 02:55:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:08.308 02:55:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 316106' 00:15:08.308 Process pid: 316106 00:15:08.308 02:55:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:08.308 02:55:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 316106 00:15:08.308 02:55:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 316106 ']' 00:15:08.308 02:55:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.308 02:55:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:08.308 02:55:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.308 02:55:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:08.308 02:55:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:08.308 [2024-05-13 02:55:58.762478] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:15:08.308 [2024-05-13 02:55:58.762568] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:08.308 EAL: No free 2048 kB hugepages reported on node 1 00:15:08.308 [2024-05-13 02:55:58.797743] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:08.308 [2024-05-13 02:55:58.828833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:08.308 [2024-05-13 02:55:58.921100] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:08.308 [2024-05-13 02:55:58.921160] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:08.308 [2024-05-13 02:55:58.921186] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:08.308 [2024-05-13 02:55:58.921200] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:08.308 [2024-05-13 02:55:58.921213] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:08.308 [2024-05-13 02:55:58.921286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:08.308 [2024-05-13 02:55:58.921338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:08.308 [2024-05-13 02:55:58.921453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:08.308 [2024-05-13 02:55:58.921456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.308 02:55:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:08.308 02:55:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:15:08.308 02:55:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:09.679 02:56:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:09.679 02:56:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:09.679 02:56:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:09.679 02:56:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:09.679 02:56:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:09.679 02:56:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:09.937 Malloc1 00:15:09.937 02:56:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:10.195 02:56:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:10.452 02:56:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:10.709 [2024-05-13 02:56:01.342567] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:10.709 02:56:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:10.709 02:56:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:10.709 02:56:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:10.967 Malloc2 00:15:10.967 02:56:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:11.224 02:56:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:11.515 02:56:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 
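For reference, the target-side setup that the trace above just completed can be reproduced by hand with roughly the following shell sequence. This is a minimal sketch rather than the exact nvmf_vfio_user.sh logic: it assumes an in-tree SPDK build under the same workspace path and root privileges, and it omits the waitforlisten/trap housekeeping the script performs; the commands, sizes and paths are the ones visible in the log.

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc_py=$spdk/scripts/rpc.py

  # Start the NVMe-oF target on cores 0-3, as the script did above.
  $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
  # (wait here until /var/tmp/spdk.sock accepts RPCs before continuing)

  # One vfio-user transport, then per device: a malloc bdev
  # (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512), a subsystem,
  # its namespace, and a vfio-user listener.
  $rpc_py nvmf_create_transport -t VFIOUSER
  for i in 1 2; do
      mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
      $rpc_py bdev_malloc_create 64 512 -b Malloc$i
      $rpc_py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
      $rpc_py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      $rpc_py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
          -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
  done

The identify and perf tools exercised next attach to the first of these endpoints with -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1', the same transport ID string that appears in the invocations below.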
00:15:11.775 02:56:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:11.775 02:56:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:11.775 02:56:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:11.775 02:56:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:11.775 02:56:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:11.775 02:56:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:11.775 [2024-05-13 02:56:02.399107] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:15:11.775 [2024-05-13 02:56:02.399145] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid316526 ] 00:15:11.775 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.775 [2024-05-13 02:56:02.416345] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:11.775 [2024-05-13 02:56:02.433919] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:11.775 [2024-05-13 02:56:02.442218] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:11.775 [2024-05-13 02:56:02.442245] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f7eab236000 00:15:11.775 [2024-05-13 02:56:02.443213] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:11.775 [2024-05-13 02:56:02.444205] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:11.775 [2024-05-13 02:56:02.445208] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:11.775 [2024-05-13 02:56:02.446215] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:11.775 [2024-05-13 02:56:02.447223] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:11.775 [2024-05-13 02:56:02.448226] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:11.775 [2024-05-13 02:56:02.449229] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:11.775 [2024-05-13 02:56:02.450238] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:11.775 [2024-05-13 02:56:02.451246] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:11.775 [2024-05-13 
02:56:02.451265] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f7ea9fe7000 00:15:11.775 [2024-05-13 02:56:02.452378] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:11.775 [2024-05-13 02:56:02.472025] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:11.775 [2024-05-13 02:56:02.472063] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:15:11.775 [2024-05-13 02:56:02.474380] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:11.775 [2024-05-13 02:56:02.474436] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:11.775 [2024-05-13 02:56:02.474528] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:15:11.775 [2024-05-13 02:56:02.474559] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:15:11.775 [2024-05-13 02:56:02.474570] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:15:11.775 [2024-05-13 02:56:02.475371] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:11.775 [2024-05-13 02:56:02.475390] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:15:11.775 [2024-05-13 02:56:02.475401] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:15:11.775 [2024-05-13 02:56:02.476375] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:11.775 [2024-05-13 02:56:02.476394] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:15:11.775 [2024-05-13 02:56:02.476406] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:15:11.775 [2024-05-13 02:56:02.477381] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:11.775 [2024-05-13 02:56:02.477398] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:11.775 [2024-05-13 02:56:02.478385] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:11.775 [2024-05-13 02:56:02.478407] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:15:11.775 [2024-05-13 02:56:02.478417] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:15:11.775 [2024-05-13 
02:56:02.478428] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:11.775 [2024-05-13 02:56:02.478538] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:15:11.775 [2024-05-13 02:56:02.478545] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:11.775 [2024-05-13 02:56:02.478554] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:11.776 [2024-05-13 02:56:02.479392] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:11.776 [2024-05-13 02:56:02.480397] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:11.776 [2024-05-13 02:56:02.481404] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:11.776 [2024-05-13 02:56:02.482403] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:11.776 [2024-05-13 02:56:02.482530] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:11.776 [2024-05-13 02:56:02.483416] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:11.776 [2024-05-13 02:56:02.483433] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:11.776 [2024-05-13 02:56:02.483441] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:15:11.776 [2024-05-13 02:56:02.483465] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:15:11.776 [2024-05-13 02:56:02.483478] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:15:11.776 [2024-05-13 02:56:02.483501] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:11.776 [2024-05-13 02:56:02.483511] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:11.776 [2024-05-13 02:56:02.483531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:11.776 [2024-05-13 02:56:02.483602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:11.776 [2024-05-13 02:56:02.483618] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:15:11.776 [2024-05-13 02:56:02.483626] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:15:11.776 [2024-05-13 02:56:02.483633] 
nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:15:11.776 [2024-05-13 02:56:02.483640] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:11.776 [2024-05-13 02:56:02.483648] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:15:11.776 [2024-05-13 02:56:02.483655] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:15:11.776 [2024-05-13 02:56:02.483666] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:15:11.776 [2024-05-13 02:56:02.483709] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:15:11.776 [2024-05-13 02:56:02.483730] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:11.776 [2024-05-13 02:56:02.483748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:11.776 [2024-05-13 02:56:02.483780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.776 [2024-05-13 02:56:02.483793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.776 [2024-05-13 02:56:02.483805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.776 [2024-05-13 02:56:02.483816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.776 [2024-05-13 02:56:02.483825] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:11.776 [2024-05-13 02:56:02.483840] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:11.776 [2024-05-13 02:56:02.483855] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:11.776 [2024-05-13 02:56:02.483867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:11.776 [2024-05-13 02:56:02.483878] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:15:11.776 [2024-05-13 02:56:02.483887] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:11.776 [2024-05-13 02:56:02.483902] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:15:11.776 [2024-05-13 02:56:02.483913] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 
00:15:11.776 [2024-05-13 02:56:02.483926] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:11.776 [2024-05-13 02:56:02.483943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:11.776 [2024-05-13 02:56:02.484016] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:15:11.776 [2024-05-13 02:56:02.484032] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:11.776 [2024-05-13 02:56:02.484045] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:11.776 [2024-05-13 02:56:02.484053] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:11.776 [2024-05-13 02:56:02.484063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:11.776 [2024-05-13 02:56:02.484093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:11.776 [2024-05-13 02:56:02.484113] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:15:11.776 [2024-05-13 02:56:02.484137] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:15:11.776 [2024-05-13 02:56:02.484151] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:15:11.776 [2024-05-13 02:56:02.484162] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:11.776 [2024-05-13 02:56:02.484170] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:11.776 [2024-05-13 02:56:02.484179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:11.776 [2024-05-13 02:56:02.484200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:11.776 [2024-05-13 02:56:02.484221] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:11.776 [2024-05-13 02:56:02.484235] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:11.776 [2024-05-13 02:56:02.484245] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:11.776 [2024-05-13 02:56:02.484253] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:11.776 [2024-05-13 02:56:02.484262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:11.776 [2024-05-13 02:56:02.484273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 
00:15:11.776 [2024-05-13 02:56:02.484287] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:11.776 [2024-05-13 02:56:02.484298] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:15:11.776 [2024-05-13 02:56:02.484311] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:15:11.776 [2024-05-13 02:56:02.484321] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:11.776 [2024-05-13 02:56:02.484330] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:15:11.776 [2024-05-13 02:56:02.484338] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:15:11.776 [2024-05-13 02:56:02.484345] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:15:11.776 [2024-05-13 02:56:02.484353] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:15:11.776 [2024-05-13 02:56:02.484381] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:11.776 [2024-05-13 02:56:02.484398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:11.776 [2024-05-13 02:56:02.484416] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:11.776 [2024-05-13 02:56:02.484427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:11.776 [2024-05-13 02:56:02.484442] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:11.776 [2024-05-13 02:56:02.484456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:11.776 [2024-05-13 02:56:02.484471] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:11.776 [2024-05-13 02:56:02.484482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:11.776 [2024-05-13 02:56:02.484499] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:11.776 [2024-05-13 02:56:02.484507] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:11.776 [2024-05-13 02:56:02.484513] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:11.776 [2024-05-13 02:56:02.484519] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:11.776 [2024-05-13 02:56:02.484528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 
PRP2 0x2000002f7000 00:15:11.776 [2024-05-13 02:56:02.484539] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:11.776 [2024-05-13 02:56:02.484547] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:11.776 [2024-05-13 02:56:02.484555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:11.776 [2024-05-13 02:56:02.484565] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:11.776 [2024-05-13 02:56:02.484573] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:11.777 [2024-05-13 02:56:02.484581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:11.777 [2024-05-13 02:56:02.484593] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:11.777 [2024-05-13 02:56:02.484600] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:11.777 [2024-05-13 02:56:02.484609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:11.777 [2024-05-13 02:56:02.484620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:11.777 [2024-05-13 02:56:02.484639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:11.777 [2024-05-13 02:56:02.484654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:11.777 [2024-05-13 02:56:02.484669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:11.777 ===================================================== 00:15:11.777 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:11.777 ===================================================== 00:15:11.777 Controller Capabilities/Features 00:15:11.777 ================================ 00:15:11.777 Vendor ID: 4e58 00:15:11.777 Subsystem Vendor ID: 4e58 00:15:11.777 Serial Number: SPDK1 00:15:11.777 Model Number: SPDK bdev Controller 00:15:11.777 Firmware Version: 24.05 00:15:11.777 Recommended Arb Burst: 6 00:15:11.777 IEEE OUI Identifier: 8d 6b 50 00:15:11.777 Multi-path I/O 00:15:11.777 May have multiple subsystem ports: Yes 00:15:11.777 May have multiple controllers: Yes 00:15:11.777 Associated with SR-IOV VF: No 00:15:11.777 Max Data Transfer Size: 131072 00:15:11.777 Max Number of Namespaces: 32 00:15:11.777 Max Number of I/O Queues: 127 00:15:11.777 NVMe Specification Version (VS): 1.3 00:15:11.777 NVMe Specification Version (Identify): 1.3 00:15:11.777 Maximum Queue Entries: 256 00:15:11.777 Contiguous Queues Required: Yes 00:15:11.777 Arbitration Mechanisms Supported 00:15:11.777 Weighted Round Robin: Not Supported 00:15:11.777 Vendor Specific: Not Supported 00:15:11.777 Reset Timeout: 15000 ms 00:15:11.777 Doorbell Stride: 4 bytes 00:15:11.777 NVM Subsystem Reset: Not Supported 00:15:11.777 Command Sets Supported 00:15:11.777 NVM Command Set: Supported 00:15:11.777 Boot 
Partition: Not Supported 00:15:11.777 Memory Page Size Minimum: 4096 bytes 00:15:11.777 Memory Page Size Maximum: 4096 bytes 00:15:11.777 Persistent Memory Region: Not Supported 00:15:11.777 Optional Asynchronous Events Supported 00:15:11.777 Namespace Attribute Notices: Supported 00:15:11.777 Firmware Activation Notices: Not Supported 00:15:11.777 ANA Change Notices: Not Supported 00:15:11.777 PLE Aggregate Log Change Notices: Not Supported 00:15:11.777 LBA Status Info Alert Notices: Not Supported 00:15:11.777 EGE Aggregate Log Change Notices: Not Supported 00:15:11.777 Normal NVM Subsystem Shutdown event: Not Supported 00:15:11.777 Zone Descriptor Change Notices: Not Supported 00:15:11.777 Discovery Log Change Notices: Not Supported 00:15:11.777 Controller Attributes 00:15:11.777 128-bit Host Identifier: Supported 00:15:11.777 Non-Operational Permissive Mode: Not Supported 00:15:11.777 NVM Sets: Not Supported 00:15:11.777 Read Recovery Levels: Not Supported 00:15:11.777 Endurance Groups: Not Supported 00:15:11.777 Predictable Latency Mode: Not Supported 00:15:11.777 Traffic Based Keep ALive: Not Supported 00:15:11.777 Namespace Granularity: Not Supported 00:15:11.777 SQ Associations: Not Supported 00:15:11.777 UUID List: Not Supported 00:15:11.777 Multi-Domain Subsystem: Not Supported 00:15:11.777 Fixed Capacity Management: Not Supported 00:15:11.777 Variable Capacity Management: Not Supported 00:15:11.777 Delete Endurance Group: Not Supported 00:15:11.777 Delete NVM Set: Not Supported 00:15:11.777 Extended LBA Formats Supported: Not Supported 00:15:11.777 Flexible Data Placement Supported: Not Supported 00:15:11.777 00:15:11.777 Controller Memory Buffer Support 00:15:11.777 ================================ 00:15:11.777 Supported: No 00:15:11.777 00:15:11.777 Persistent Memory Region Support 00:15:11.777 ================================ 00:15:11.777 Supported: No 00:15:11.777 00:15:11.777 Admin Command Set Attributes 00:15:11.777 ============================ 00:15:11.777 Security Send/Receive: Not Supported 00:15:11.777 Format NVM: Not Supported 00:15:11.777 Firmware Activate/Download: Not Supported 00:15:11.777 Namespace Management: Not Supported 00:15:11.777 Device Self-Test: Not Supported 00:15:11.777 Directives: Not Supported 00:15:11.777 NVMe-MI: Not Supported 00:15:11.777 Virtualization Management: Not Supported 00:15:11.777 Doorbell Buffer Config: Not Supported 00:15:11.777 Get LBA Status Capability: Not Supported 00:15:11.777 Command & Feature Lockdown Capability: Not Supported 00:15:11.777 Abort Command Limit: 4 00:15:11.777 Async Event Request Limit: 4 00:15:11.777 Number of Firmware Slots: N/A 00:15:11.777 Firmware Slot 1 Read-Only: N/A 00:15:11.777 Firmware Activation Without Reset: N/A 00:15:11.777 Multiple Update Detection Support: N/A 00:15:11.777 Firmware Update Granularity: No Information Provided 00:15:11.777 Per-Namespace SMART Log: No 00:15:11.777 Asymmetric Namespace Access Log Page: Not Supported 00:15:11.777 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:11.777 Command Effects Log Page: Supported 00:15:11.777 Get Log Page Extended Data: Supported 00:15:11.777 Telemetry Log Pages: Not Supported 00:15:11.777 Persistent Event Log Pages: Not Supported 00:15:11.777 Supported Log Pages Log Page: May Support 00:15:11.777 Commands Supported & Effects Log Page: Not Supported 00:15:11.777 Feature Identifiers & Effects Log Page:May Support 00:15:11.777 NVMe-MI Commands & Effects Log Page: May Support 00:15:11.777 Data Area 4 for Telemetry Log: Not Supported 00:15:11.777 
Error Log Page Entries Supported: 128 00:15:11.777 Keep Alive: Supported 00:15:11.777 Keep Alive Granularity: 10000 ms 00:15:11.777 00:15:11.777 NVM Command Set Attributes 00:15:11.777 ========================== 00:15:11.777 Submission Queue Entry Size 00:15:11.777 Max: 64 00:15:11.777 Min: 64 00:15:11.777 Completion Queue Entry Size 00:15:11.777 Max: 16 00:15:11.777 Min: 16 00:15:11.777 Number of Namespaces: 32 00:15:11.777 Compare Command: Supported 00:15:11.777 Write Uncorrectable Command: Not Supported 00:15:11.777 Dataset Management Command: Supported 00:15:11.777 Write Zeroes Command: Supported 00:15:11.777 Set Features Save Field: Not Supported 00:15:11.777 Reservations: Not Supported 00:15:11.777 Timestamp: Not Supported 00:15:11.777 Copy: Supported 00:15:11.777 Volatile Write Cache: Present 00:15:11.777 Atomic Write Unit (Normal): 1 00:15:11.777 Atomic Write Unit (PFail): 1 00:15:11.777 Atomic Compare & Write Unit: 1 00:15:11.777 Fused Compare & Write: Supported 00:15:11.777 Scatter-Gather List 00:15:11.777 SGL Command Set: Supported (Dword aligned) 00:15:11.777 SGL Keyed: Not Supported 00:15:11.777 SGL Bit Bucket Descriptor: Not Supported 00:15:11.777 SGL Metadata Pointer: Not Supported 00:15:11.777 Oversized SGL: Not Supported 00:15:11.777 SGL Metadata Address: Not Supported 00:15:11.777 SGL Offset: Not Supported 00:15:11.777 Transport SGL Data Block: Not Supported 00:15:11.777 Replay Protected Memory Block: Not Supported 00:15:11.777 00:15:11.777 Firmware Slot Information 00:15:11.777 ========================= 00:15:11.777 Active slot: 1 00:15:11.777 Slot 1 Firmware Revision: 24.05 00:15:11.777 00:15:11.777 00:15:11.777 Commands Supported and Effects 00:15:11.777 ============================== 00:15:11.777 Admin Commands 00:15:11.777 -------------- 00:15:11.777 Get Log Page (02h): Supported 00:15:11.777 Identify (06h): Supported 00:15:11.777 Abort (08h): Supported 00:15:11.777 Set Features (09h): Supported 00:15:11.777 Get Features (0Ah): Supported 00:15:11.777 Asynchronous Event Request (0Ch): Supported 00:15:11.777 Keep Alive (18h): Supported 00:15:11.777 I/O Commands 00:15:11.777 ------------ 00:15:11.777 Flush (00h): Supported LBA-Change 00:15:11.777 Write (01h): Supported LBA-Change 00:15:11.777 Read (02h): Supported 00:15:11.777 Compare (05h): Supported 00:15:11.777 Write Zeroes (08h): Supported LBA-Change 00:15:11.777 Dataset Management (09h): Supported LBA-Change 00:15:11.777 Copy (19h): Supported LBA-Change 00:15:11.777 Unknown (79h): Supported LBA-Change 00:15:11.777 Unknown (7Ah): Supported 00:15:11.777 00:15:11.777 Error Log 00:15:11.777 ========= 00:15:11.777 00:15:11.777 Arbitration 00:15:11.777 =========== 00:15:11.777 Arbitration Burst: 1 00:15:11.777 00:15:11.777 Power Management 00:15:11.777 ================ 00:15:11.777 Number of Power States: 1 00:15:11.777 Current Power State: Power State #0 00:15:11.777 Power State #0: 00:15:11.777 Max Power: 0.00 W 00:15:11.777 Non-Operational State: Operational 00:15:11.777 Entry Latency: Not Reported 00:15:11.777 Exit Latency: Not Reported 00:15:11.777 Relative Read Throughput: 0 00:15:11.777 Relative Read Latency: 0 00:15:11.777 Relative Write Throughput: 0 00:15:11.778 Relative Write Latency: 0 00:15:11.778 Idle Power: Not Reported 00:15:11.778 Active Power: Not Reported 00:15:11.778 Non-Operational Permissive Mode: Not Supported 00:15:11.778 00:15:11.778 Health Information 00:15:11.778 ================== 00:15:11.778 Critical Warnings: 00:15:11.778 Available Spare Space: OK 00:15:11.778 Temperature: OK 00:15:11.778 
Device Reliability: OK 00:15:11.778 Read Only: No 00:15:11.778 Volatile Memory Backup: OK 00:15:11.778 Current Temperature: 0 Kelvin (-2[2024-05-13 02:56:02.484819] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:11.778 [2024-05-13 02:56:02.484836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:11.778 [2024-05-13 02:56:02.484876] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:15:11.778 [2024-05-13 02:56:02.484894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.778 [2024-05-13 02:56:02.484904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.778 [2024-05-13 02:56:02.484914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.778 [2024-05-13 02:56:02.484924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.778 [2024-05-13 02:56:02.485433] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:11.778 [2024-05-13 02:56:02.485454] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:11.778 [2024-05-13 02:56:02.486427] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:11.778 [2024-05-13 02:56:02.486507] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:15:11.778 [2024-05-13 02:56:02.486521] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:15:11.778 [2024-05-13 02:56:02.487436] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:11.778 [2024-05-13 02:56:02.487457] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:15:11.778 [2024-05-13 02:56:02.487511] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:11.778 [2024-05-13 02:56:02.491706] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:11.778 73 Celsius) 00:15:11.778 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:11.778 Available Spare: 0% 00:15:11.778 Available Spare Threshold: 0% 00:15:11.778 Life Percentage Used: 0% 00:15:11.778 Data Units Read: 0 00:15:11.778 Data Units Written: 0 00:15:11.778 Host Read Commands: 0 00:15:11.778 Host Write Commands: 0 00:15:11.778 Controller Busy Time: 0 minutes 00:15:11.778 Power Cycles: 0 00:15:11.778 Power On Hours: 0 hours 00:15:11.778 Unsafe Shutdowns: 0 00:15:11.778 Unrecoverable Media Errors: 0 00:15:11.778 Lifetime Error Log Entries: 0 00:15:11.778 Warning Temperature Time: 0 minutes 00:15:11.778 Critical Temperature Time: 0 minutes 00:15:11.778 00:15:11.778 Number of Queues 00:15:11.778 ================ 00:15:11.778 
Number of I/O Submission Queues: 127 00:15:11.778 Number of I/O Completion Queues: 127 00:15:11.778 00:15:11.778 Active Namespaces 00:15:11.778 ================= 00:15:11.778 Namespace ID:1 00:15:11.778 Error Recovery Timeout: Unlimited 00:15:11.778 Command Set Identifier: NVM (00h) 00:15:11.778 Deallocate: Supported 00:15:11.778 Deallocated/Unwritten Error: Not Supported 00:15:11.778 Deallocated Read Value: Unknown 00:15:11.778 Deallocate in Write Zeroes: Not Supported 00:15:11.778 Deallocated Guard Field: 0xFFFF 00:15:11.778 Flush: Supported 00:15:11.778 Reservation: Supported 00:15:11.778 Namespace Sharing Capabilities: Multiple Controllers 00:15:11.778 Size (in LBAs): 131072 (0GiB) 00:15:11.778 Capacity (in LBAs): 131072 (0GiB) 00:15:11.778 Utilization (in LBAs): 131072 (0GiB) 00:15:11.778 NGUID: 53B09E863CD746DEB197BB089D71F398 00:15:11.778 UUID: 53b09e86-3cd7-46de-b197-bb089d71f398 00:15:11.778 Thin Provisioning: Not Supported 00:15:11.778 Per-NS Atomic Units: Yes 00:15:11.778 Atomic Boundary Size (Normal): 0 00:15:11.778 Atomic Boundary Size (PFail): 0 00:15:11.778 Atomic Boundary Offset: 0 00:15:11.778 Maximum Single Source Range Length: 65535 00:15:11.778 Maximum Copy Length: 65535 00:15:11.778 Maximum Source Range Count: 1 00:15:11.778 NGUID/EUI64 Never Reused: No 00:15:11.778 Namespace Write Protected: No 00:15:11.778 Number of LBA Formats: 1 00:15:11.778 Current LBA Format: LBA Format #00 00:15:11.778 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:11.778 00:15:11.778 02:56:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:11.778 EAL: No free 2048 kB hugepages reported on node 1 00:15:12.035 [2024-05-13 02:56:02.721555] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:17.298 Initializing NVMe Controllers 00:15:17.298 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:17.298 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:17.298 Initialization complete. Launching workers. 
00:15:17.298 ======================================================== 00:15:17.298 Latency(us) 00:15:17.298 Device Information : IOPS MiB/s Average min max 00:15:17.298 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 35451.43 138.48 3610.59 1173.16 7299.00 00:15:17.298 ======================================================== 00:15:17.298 Total : 35451.43 138.48 3610.59 1173.16 7299.00 00:15:17.298 00:15:17.298 [2024-05-13 02:56:07.743843] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:17.298 02:56:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:17.298 EAL: No free 2048 kB hugepages reported on node 1 00:15:17.298 [2024-05-13 02:56:07.989073] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:22.563 Initializing NVMe Controllers 00:15:22.563 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:22.563 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:22.563 Initialization complete. Launching workers. 00:15:22.563 ======================================================== 00:15:22.564 Latency(us) 00:15:22.564 Device Information : IOPS MiB/s Average min max 00:15:22.564 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7982.84 6821.51 11021.49 00:15:22.564 ======================================================== 00:15:22.564 Total : 16051.20 62.70 7982.84 6821.51 11021.49 00:15:22.564 00:15:22.564 [2024-05-13 02:56:13.028762] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:22.564 02:56:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:22.564 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.564 [2024-05-13 02:56:13.237828] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:27.826 [2024-05-13 02:56:18.325178] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:27.826 Initializing NVMe Controllers 00:15:27.826 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:27.826 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:27.826 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:27.826 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:27.826 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:27.826 Initialization complete. Launching workers. 
00:15:27.826 Starting thread on core 2 00:15:27.826 Starting thread on core 3 00:15:27.826 Starting thread on core 1 00:15:27.826 02:56:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:27.826 EAL: No free 2048 kB hugepages reported on node 1 00:15:27.826 [2024-05-13 02:56:18.625128] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:31.107 [2024-05-13 02:56:21.695028] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:31.107 Initializing NVMe Controllers 00:15:31.107 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:31.107 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:31.107 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:31.107 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:31.107 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:31.107 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:31.107 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:31.107 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:31.107 Initialization complete. Launching workers. 00:15:31.107 Starting thread on core 1 with urgent priority queue 00:15:31.107 Starting thread on core 2 with urgent priority queue 00:15:31.107 Starting thread on core 3 with urgent priority queue 00:15:31.107 Starting thread on core 0 with urgent priority queue 00:15:31.107 SPDK bdev Controller (SPDK1 ) core 0: 5998.00 IO/s 16.67 secs/100000 ios 00:15:31.107 SPDK bdev Controller (SPDK1 ) core 1: 5361.67 IO/s 18.65 secs/100000 ios 00:15:31.107 SPDK bdev Controller (SPDK1 ) core 2: 5697.00 IO/s 17.55 secs/100000 ios 00:15:31.107 SPDK bdev Controller (SPDK1 ) core 3: 6140.67 IO/s 16.28 secs/100000 ios 00:15:31.107 ======================================================== 00:15:31.107 00:15:31.107 02:56:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:31.107 EAL: No free 2048 kB hugepages reported on node 1 00:15:31.433 [2024-05-13 02:56:22.000273] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:31.433 Initializing NVMe Controllers 00:15:31.433 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:31.433 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:31.433 Namespace ID: 1 size: 0GB 00:15:31.433 Initialization complete. 00:15:31.433 INFO: using host memory buffer for IO 00:15:31.433 Hello world! 
00:15:31.433 [2024-05-13 02:56:22.033854] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:31.433 02:56:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:31.433 EAL: No free 2048 kB hugepages reported on node 1 00:15:31.690 [2024-05-13 02:56:22.319174] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:32.622 Initializing NVMe Controllers 00:15:32.622 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:32.622 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:32.622 Initialization complete. Launching workers. 00:15:32.622 submit (in ns) avg, min, max = 8525.2, 3507.8, 7988992.2 00:15:32.622 complete (in ns) avg, min, max = 26317.7, 2055.6, 8004231.1 00:15:32.622 00:15:32.622 Submit histogram 00:15:32.622 ================ 00:15:32.622 Range in us Cumulative Count 00:15:32.622 3.484 - 3.508: 0.0072% ( 1) 00:15:32.622 3.508 - 3.532: 0.3088% ( 42) 00:15:32.622 3.532 - 3.556: 2.1758% ( 260) 00:15:32.622 3.556 - 3.579: 5.2348% ( 426) 00:15:32.622 3.579 - 3.603: 11.7622% ( 909) 00:15:32.622 3.603 - 3.627: 21.1260% ( 1304) 00:15:32.622 3.627 - 3.650: 30.3820% ( 1289) 00:15:32.622 3.650 - 3.674: 37.4192% ( 980) 00:15:32.622 3.674 - 3.698: 44.9806% ( 1053) 00:15:32.622 3.698 - 3.721: 52.7646% ( 1084) 00:15:32.622 3.721 - 3.745: 58.2077% ( 758) 00:15:32.622 3.745 - 3.769: 62.3582% ( 578) 00:15:32.622 3.769 - 3.793: 65.0294% ( 372) 00:15:32.622 3.793 - 3.816: 68.1675% ( 437) 00:15:32.622 3.816 - 3.840: 71.3916% ( 449) 00:15:32.622 3.840 - 3.864: 75.7073% ( 601) 00:15:32.622 3.864 - 3.887: 79.3623% ( 509) 00:15:32.622 3.887 - 3.911: 82.6296% ( 455) 00:15:32.622 3.911 - 3.935: 85.6240% ( 417) 00:15:32.622 3.935 - 3.959: 87.8213% ( 306) 00:15:32.622 3.959 - 3.982: 89.6309% ( 252) 00:15:32.622 3.982 - 4.006: 90.9809% ( 188) 00:15:32.622 4.006 - 4.030: 91.9862% ( 140) 00:15:32.622 4.030 - 4.053: 92.9843% ( 139) 00:15:32.622 4.053 - 4.077: 93.9179% ( 130) 00:15:32.622 4.077 - 4.101: 94.6431% ( 101) 00:15:32.622 4.101 - 4.124: 95.3540% ( 99) 00:15:32.622 4.124 - 4.148: 95.8926% ( 75) 00:15:32.622 4.148 - 4.172: 96.2947% ( 56) 00:15:32.622 4.172 - 4.196: 96.5101% ( 30) 00:15:32.622 4.196 - 4.219: 96.7615% ( 35) 00:15:32.622 4.219 - 4.243: 96.9194% ( 22) 00:15:32.622 4.243 - 4.267: 97.0702% ( 21) 00:15:32.622 4.267 - 4.290: 97.1995% ( 18) 00:15:32.622 4.290 - 4.314: 97.3072% ( 15) 00:15:32.622 4.314 - 4.338: 97.3790% ( 10) 00:15:32.622 4.338 - 4.361: 97.4724% ( 13) 00:15:32.622 4.361 - 4.385: 97.5513% ( 11) 00:15:32.622 4.385 - 4.409: 97.5872% ( 5) 00:15:32.622 4.409 - 4.433: 97.6088% ( 3) 00:15:32.622 4.433 - 4.456: 97.6232% ( 2) 00:15:32.622 4.456 - 4.480: 97.6447% ( 3) 00:15:32.622 4.480 - 4.504: 97.6591% ( 2) 00:15:32.622 4.504 - 4.527: 97.6878% ( 4) 00:15:32.622 4.551 - 4.575: 97.7021% ( 2) 00:15:32.622 4.599 - 4.622: 97.7165% ( 2) 00:15:32.622 4.622 - 4.646: 97.7309% ( 2) 00:15:32.622 4.646 - 4.670: 97.7524% ( 3) 00:15:32.622 4.670 - 4.693: 97.7811% ( 4) 00:15:32.622 4.693 - 4.717: 97.8027% ( 3) 00:15:32.622 4.717 - 4.741: 97.8386% ( 5) 00:15:32.622 4.741 - 4.764: 97.8817% ( 6) 00:15:32.622 4.764 - 4.788: 97.9176% ( 5) 00:15:32.622 4.788 - 4.812: 97.9391% ( 3) 00:15:32.622 4.812 - 4.836: 97.9894% ( 7) 00:15:32.622 4.836 - 4.859: 98.0181% ( 4) 00:15:32.622 
4.859 - 4.883: 98.0468% ( 4) 00:15:32.622 4.883 - 4.907: 98.1114% ( 9) 00:15:32.622 4.907 - 4.930: 98.1545% ( 6) 00:15:32.622 4.930 - 4.954: 98.1761% ( 3) 00:15:32.622 4.954 - 4.978: 98.1904% ( 2) 00:15:32.622 4.978 - 5.001: 98.2192% ( 4) 00:15:32.622 5.001 - 5.025: 98.2263% ( 1) 00:15:32.622 5.025 - 5.049: 98.2407% ( 2) 00:15:32.622 5.049 - 5.073: 98.2694% ( 4) 00:15:32.622 5.073 - 5.096: 98.3053% ( 5) 00:15:32.622 5.096 - 5.120: 98.3197% ( 2) 00:15:32.622 5.120 - 5.144: 98.3484% ( 4) 00:15:32.622 5.144 - 5.167: 98.3556% ( 1) 00:15:32.622 5.167 - 5.191: 98.3628% ( 1) 00:15:32.622 5.191 - 5.215: 98.3843% ( 3) 00:15:32.622 5.215 - 5.239: 98.3987% ( 2) 00:15:32.622 5.239 - 5.262: 98.4059% ( 1) 00:15:32.622 5.381 - 5.404: 98.4130% ( 1) 00:15:32.622 5.499 - 5.523: 98.4202% ( 1) 00:15:32.622 5.736 - 5.760: 98.4274% ( 1) 00:15:32.622 5.784 - 5.807: 98.4346% ( 1) 00:15:32.622 6.068 - 6.116: 98.4418% ( 1) 00:15:32.622 6.163 - 6.210: 98.4489% ( 1) 00:15:32.622 6.210 - 6.258: 98.4561% ( 1) 00:15:32.622 6.353 - 6.400: 98.4705% ( 2) 00:15:32.622 6.542 - 6.590: 98.4848% ( 2) 00:15:32.622 6.684 - 6.732: 98.4920% ( 1) 00:15:32.622 6.779 - 6.827: 98.5064% ( 2) 00:15:32.622 6.827 - 6.874: 98.5136% ( 1) 00:15:32.622 6.874 - 6.921: 98.5279% ( 2) 00:15:32.622 6.921 - 6.969: 98.5351% ( 1) 00:15:32.622 6.969 - 7.016: 98.5423% ( 1) 00:15:32.622 7.016 - 7.064: 98.5495% ( 1) 00:15:32.622 7.111 - 7.159: 98.5567% ( 1) 00:15:32.622 7.206 - 7.253: 98.5710% ( 2) 00:15:32.622 7.253 - 7.301: 98.5854% ( 2) 00:15:32.622 7.443 - 7.490: 98.5926% ( 1) 00:15:32.622 7.490 - 7.538: 98.5997% ( 1) 00:15:32.622 7.538 - 7.585: 98.6213% ( 3) 00:15:32.622 7.585 - 7.633: 98.6285% ( 1) 00:15:32.623 7.680 - 7.727: 98.6428% ( 2) 00:15:32.623 7.727 - 7.775: 98.6500% ( 1) 00:15:32.623 7.822 - 7.870: 98.6572% ( 1) 00:15:32.623 7.870 - 7.917: 98.6715% ( 2) 00:15:32.623 7.917 - 7.964: 98.6787% ( 1) 00:15:32.623 7.964 - 8.012: 98.6859% ( 1) 00:15:32.623 8.249 - 8.296: 98.6931% ( 1) 00:15:32.623 8.296 - 8.344: 98.7003% ( 1) 00:15:32.623 8.533 - 8.581: 98.7075% ( 1) 00:15:32.623 8.628 - 8.676: 98.7146% ( 1) 00:15:32.623 8.723 - 8.770: 98.7218% ( 1) 00:15:32.623 8.960 - 9.007: 98.7290% ( 1) 00:15:32.623 9.102 - 9.150: 98.7362% ( 1) 00:15:32.623 9.150 - 9.197: 98.7434% ( 1) 00:15:32.623 9.339 - 9.387: 98.7505% ( 1) 00:15:32.623 9.387 - 9.434: 98.7577% ( 1) 00:15:32.623 9.813 - 9.861: 98.7649% ( 1) 00:15:32.623 9.956 - 10.003: 98.7721% ( 1) 00:15:32.623 10.240 - 10.287: 98.7793% ( 1) 00:15:32.623 10.287 - 10.335: 98.7864% ( 1) 00:15:32.623 10.714 - 10.761: 98.7936% ( 1) 00:15:32.623 10.761 - 10.809: 98.8008% ( 1) 00:15:32.623 11.046 - 11.093: 98.8080% ( 1) 00:15:32.623 11.093 - 11.141: 98.8223% ( 2) 00:15:32.623 11.757 - 11.804: 98.8295% ( 1) 00:15:32.623 11.852 - 11.899: 98.8367% ( 1) 00:15:32.623 11.899 - 11.947: 98.8439% ( 1) 00:15:32.623 12.421 - 12.516: 98.8511% ( 1) 00:15:32.623 12.516 - 12.610: 98.8654% ( 2) 00:15:32.623 12.610 - 12.705: 98.8726% ( 1) 00:15:32.623 12.705 - 12.800: 98.8798% ( 1) 00:15:32.623 12.800 - 12.895: 98.8870% ( 1) 00:15:32.623 12.895 - 12.990: 98.8942% ( 1) 00:15:32.623 13.179 - 13.274: 98.9013% ( 1) 00:15:32.623 13.274 - 13.369: 98.9157% ( 2) 00:15:32.623 13.653 - 13.748: 98.9229% ( 1) 00:15:32.623 13.748 - 13.843: 98.9301% ( 1) 00:15:32.623 14.127 - 14.222: 98.9372% ( 1) 00:15:32.623 14.222 - 14.317: 98.9516% ( 2) 00:15:32.623 14.317 - 14.412: 98.9588% ( 1) 00:15:32.623 14.696 - 14.791: 98.9660% ( 1) 00:15:32.623 14.886 - 14.981: 98.9731% ( 1) 00:15:32.623 16.498 - 16.593: 98.9803% ( 1) 00:15:32.623 17.067 - 17.161: 
98.9875% ( 1) 00:15:32.623 17.161 - 17.256: 99.0090% ( 3) 00:15:32.623 17.256 - 17.351: 99.0162% ( 1) 00:15:32.623 17.446 - 17.541: 99.0665% ( 7) 00:15:32.623 17.541 - 17.636: 99.0737% ( 1) 00:15:32.623 17.636 - 17.730: 99.1024% ( 4) 00:15:32.623 17.730 - 17.825: 99.1598% ( 8) 00:15:32.623 17.825 - 17.920: 99.1670% ( 1) 00:15:32.623 17.920 - 18.015: 99.2029% ( 5) 00:15:32.623 18.015 - 18.110: 99.2317% ( 4) 00:15:32.623 18.110 - 18.204: 99.2891% ( 8) 00:15:32.623 18.204 - 18.299: 99.3537% ( 9) 00:15:32.623 18.299 - 18.394: 99.4327% ( 11) 00:15:32.623 18.394 - 18.489: 99.4973% ( 9) 00:15:32.623 18.489 - 18.584: 99.5404% ( 6) 00:15:32.623 18.584 - 18.679: 99.6051% ( 9) 00:15:32.623 18.679 - 18.773: 99.6625% ( 8) 00:15:32.623 18.773 - 18.868: 99.7128% ( 7) 00:15:32.623 18.868 - 18.963: 99.7343% ( 3) 00:15:32.623 18.963 - 19.058: 99.7559% ( 3) 00:15:32.623 19.058 - 19.153: 99.7846% ( 4) 00:15:32.623 19.153 - 19.247: 99.7918% ( 1) 00:15:32.623 19.247 - 19.342: 99.7989% ( 1) 00:15:32.623 19.342 - 19.437: 99.8205% ( 3) 00:15:32.623 19.437 - 19.532: 99.8348% ( 2) 00:15:32.623 19.532 - 19.627: 99.8420% ( 1) 00:15:32.623 19.721 - 19.816: 99.8492% ( 1) 00:15:32.623 24.273 - 24.462: 99.8564% ( 1) 00:15:32.623 24.652 - 24.841: 99.8636% ( 1) 00:15:32.623 28.065 - 28.255: 99.8707% ( 1) 00:15:32.623 28.255 - 28.444: 99.8779% ( 1) 00:15:32.623 28.634 - 28.824: 99.8851% ( 1) 00:15:32.623 29.203 - 29.393: 99.8923% ( 1) 00:15:32.623 3980.705 - 4004.978: 99.9641% ( 10) 00:15:32.623 4004.978 - 4029.250: 99.9928% ( 4) 00:15:32.623 7961.410 - 8009.956: 100.0000% ( 1) 00:15:32.623 00:15:32.623 Complete histogram 00:15:32.623 ================== 00:15:32.623 Range in us Cumulative Count 00:15:32.623 2.050 - 2.062: 0.6319% ( 88) 00:15:32.623 2.062 - 2.074: 17.9161% ( 2407) 00:15:32.623 2.074 - 2.086: 22.2174% ( 599) 00:15:32.623 2.086 - 2.098: 32.4932% ( 1431) 00:15:32.623 2.098 - 2.110: 56.9438% ( 3405) 00:15:32.623 2.110 - 2.121: 60.0029% ( 426) 00:15:32.623 2.121 - 2.133: 63.6866% ( 513) 00:15:32.623 2.133 - 2.145: 69.0004% ( 740) 00:15:32.623 2.145 - 2.157: 69.7975% ( 111) 00:15:32.623 2.157 - 2.169: 75.8725% ( 846) 00:15:32.623 2.169 - 2.181: 82.0408% ( 859) 00:15:32.623 2.181 - 2.193: 83.7355% ( 236) 00:15:32.623 2.193 - 2.204: 85.3224% ( 221) 00:15:32.623 2.204 - 2.216: 87.9865% ( 371) 00:15:32.623 2.216 - 2.228: 89.0134% ( 143) 00:15:32.623 2.228 - 2.240: 90.7870% ( 247) 00:15:32.623 2.240 - 2.252: 93.7455% ( 412) 00:15:32.623 2.252 - 2.264: 94.5426% ( 111) 00:15:32.623 2.264 - 2.276: 94.9375% ( 55) 00:15:32.623 2.276 - 2.287: 95.4330% ( 69) 00:15:32.623 2.287 - 2.299: 95.7633% ( 46) 00:15:32.623 2.299 - 2.311: 95.8279% ( 9) 00:15:32.623 2.311 - 2.323: 95.9572% ( 18) 00:15:32.623 2.323 - 2.335: 96.1439% ( 26) 00:15:32.623 2.335 - 2.347: 96.3306% ( 26) 00:15:32.623 2.347 - 2.359: 96.6035% ( 38) 00:15:32.623 2.359 - 2.370: 96.9625% ( 50) 00:15:32.623 2.370 - 2.382: 97.3862% ( 59) 00:15:32.623 2.382 - 2.394: 97.6232% ( 33) 00:15:32.623 2.394 - 2.406: 97.8386% ( 30) 00:15:32.623 2.406 - 2.418: 97.9391% ( 14) 00:15:32.623 2.418 - 2.430: 98.1186% ( 25) 00:15:32.623 2.430 - 2.441: 98.2551% ( 19) 00:15:32.623 2.441 - 2.453: 98.3556% ( 14) 00:15:32.623 2.453 - 2.465: 98.4059% ( 7) 00:15:32.623 2.465 - 2.477: 98.4274% ( 3) 00:15:32.623 2.477 - 2.489: 98.4777% ( 7) 00:15:32.623 2.489 - 2.501: 98.4920% ( 2) 00:15:32.623 2.501 - 2.513: 98.5423% ( 7) 00:15:32.623 2.513 - 2.524: 98.5710% ( 4) 00:15:32.623 2.548 - 2.560: 98.5854% ( 2) 00:15:32.623 2.679 - 2.690: 98.5926% ( 1) 00:15:32.623 2.702 - 2.714: 98.5997% ( 1) 
00:15:32.623 2.726 - 2.738: 98.6069% ( 1) 00:15:32.623 2.738 - 2.750: 98.6141% ( 1) 00:15:32.623 2.880 - 2.892: 98.6213% ( 1) 00:15:32.623 3.247 - 3.271: 98.6285% ( 1) 00:15:32.623 3.295 - 3.319: 98.6500% ( 3) 00:15:32.623 3.319 - 3.342: 98.6572% ( 1) 00:15:32.623 3.342 - 3.366: 98.6644% ( 1) 00:15:32.623 3.366 - 3.390: 98.6715% ( 1) 00:15:32.623 3.390 - 3.413: 98.6787% ( 1) 00:15:32.623 3.508 - 3.532: 98.7003% ( 3) 00:15:32.623 3.532 - 3.556: 98.7075% ( 1) 00:15:32.623 3.603 - 3.627: 98.7146% ( 1) 00:15:32.623 3.650 - 3.674: 98.7218% ( 1) 00:15:32.623 3.959 - 3.982: 98.7290% ( 1) 00:15:32.623 4.006 - 4.030: 98.7362% ( 1) 00:15:32.623 4.030 - 4.053: 98.7434% ( 1) 00:15:32.623 4.267 - 4.290: 98.7505% ( 1) 00:15:32.623 4.551 - 4.575: 98.7577% ( 1) 00:15:32.623 4.599 - 4.622: 98.7649% ( 1) 00:15:32.623 4.954 - 4.978: 98.7721% ( 1) 00:15:32.623 5.239 - 5.262: 98.7864% ( 2) 00:15:32.623 5.333 - 5.357: 98.7936% ( 1) 00:15:32.623 5.428 - 5.452: 98.8008% ( 1) 00:15:32.623 5.547 - 5.570: 98.8152% ( 2) 00:15:32.623 5.570 - 5.594: 98.8223% ( 1) 00:15:32.623 5.618 - 5.641: 98.8295% ( 1) 00:15:32.623 5.665 - 5.689: 98.8367% ( 1) 00:15:32.623 5.902 - 5.926: 98.8439% ( 1) 00:15:32.623 6.258 - 6.305: 98.8511% ( 1) 00:15:32.623 6.542 - 6.590: 98.8583% ( 1) 00:15:32.623 6.637 - 6.684: 98.8654% ( 1) 00:15:32.623 7.253 - 7.301: 98.8726% ( 1) 00:15:32.623 8.296 - 8.344: 98.8798% ( 1) 00:15:32.623 8.770 - 8.818: 9[2024-05-13 02:56:23.342357] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:32.623 8.8870% ( 1) 00:15:32.623 10.761 - 10.809: 98.8942% ( 1) 00:15:32.623 15.644 - 15.739: 98.9013% ( 1) 00:15:32.623 15.739 - 15.834: 98.9229% ( 3) 00:15:32.623 15.929 - 16.024: 98.9516% ( 4) 00:15:32.623 16.024 - 16.119: 98.9660% ( 2) 00:15:32.623 16.119 - 16.213: 98.9803% ( 2) 00:15:32.623 16.213 - 16.308: 99.0019% ( 3) 00:15:32.623 16.308 - 16.403: 99.0378% ( 5) 00:15:32.623 16.403 - 16.498: 99.1096% ( 10) 00:15:32.623 16.498 - 16.593: 99.2029% ( 13) 00:15:32.623 16.593 - 16.687: 99.2173% ( 2) 00:15:32.623 16.687 - 16.782: 99.2317% ( 2) 00:15:32.623 16.782 - 16.877: 99.2388% ( 1) 00:15:32.623 16.877 - 16.972: 99.2604% ( 3) 00:15:32.623 16.972 - 17.067: 99.2891% ( 4) 00:15:32.623 17.067 - 17.161: 99.3106% ( 3) 00:15:32.623 17.161 - 17.256: 99.3322% ( 3) 00:15:32.623 17.351 - 17.446: 99.3394% ( 1) 00:15:32.623 17.446 - 17.541: 99.3465% ( 1) 00:15:32.623 17.636 - 17.730: 99.3537% ( 1) 00:15:32.623 17.920 - 18.015: 99.3609% ( 1) 00:15:32.623 18.015 - 18.110: 99.3681% ( 1) 00:15:32.623 18.110 - 18.204: 99.3753% ( 1) 00:15:32.623 18.204 - 18.299: 99.3825% ( 1) 00:15:32.623 21.523 - 21.618: 99.3896% ( 1) 00:15:32.623 37.736 - 37.926: 99.3968% ( 1) 00:15:32.623 153.979 - 154.738: 99.4040% ( 1) 00:15:32.623 3009.801 - 3021.938: 99.4112% ( 1) 00:15:32.623 3980.705 - 4004.978: 99.8564% ( 62) 00:15:32.623 4004.978 - 4029.250: 99.9856% ( 18) 00:15:32.623 4490.430 - 4514.702: 99.9928% ( 1) 00:15:32.624 7961.410 - 8009.956: 100.0000% ( 1) 00:15:32.624 00:15:32.624 02:56:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:32.624 02:56:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:32.624 02:56:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:32.624 02:56:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 
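For orientation, the aer_vfio_user step that begins here boils down to roughly the following sequence. This is a condensed sketch, with paths shortened relative to the SPDK checkout root; the commands and flags are the ones visible in the trace below, nothing else is assumed.
  # Dump the current subsystem layout (the JSON block that follows).
  scripts/rpc.py nvmf_get_subsystems
  # Start the AER listener against the vfio-user controller in the background;
  # it creates the touch file once it is ready for the namespace change.
  test/nvme/aer/aer -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file &
  # Trigger a namespace-attribute-changed AER by attaching a second namespace.
  scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
  wait $aerpid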
00:15:32.624 02:56:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:32.881 [ 00:15:32.881 { 00:15:32.881 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:32.881 "subtype": "Discovery", 00:15:32.881 "listen_addresses": [], 00:15:32.881 "allow_any_host": true, 00:15:32.881 "hosts": [] 00:15:32.881 }, 00:15:32.881 { 00:15:32.881 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:32.881 "subtype": "NVMe", 00:15:32.881 "listen_addresses": [ 00:15:32.881 { 00:15:32.881 "trtype": "VFIOUSER", 00:15:32.881 "adrfam": "IPv4", 00:15:32.881 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:32.881 "trsvcid": "0" 00:15:32.881 } 00:15:32.881 ], 00:15:32.881 "allow_any_host": true, 00:15:32.881 "hosts": [], 00:15:32.881 "serial_number": "SPDK1", 00:15:32.881 "model_number": "SPDK bdev Controller", 00:15:32.881 "max_namespaces": 32, 00:15:32.881 "min_cntlid": 1, 00:15:32.881 "max_cntlid": 65519, 00:15:32.881 "namespaces": [ 00:15:32.881 { 00:15:32.881 "nsid": 1, 00:15:32.881 "bdev_name": "Malloc1", 00:15:32.881 "name": "Malloc1", 00:15:32.881 "nguid": "53B09E863CD746DEB197BB089D71F398", 00:15:32.881 "uuid": "53b09e86-3cd7-46de-b197-bb089d71f398" 00:15:32.881 } 00:15:32.881 ] 00:15:32.881 }, 00:15:32.881 { 00:15:32.881 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:32.881 "subtype": "NVMe", 00:15:32.881 "listen_addresses": [ 00:15:32.881 { 00:15:32.881 "trtype": "VFIOUSER", 00:15:32.881 "adrfam": "IPv4", 00:15:32.881 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:32.881 "trsvcid": "0" 00:15:32.881 } 00:15:32.881 ], 00:15:32.881 "allow_any_host": true, 00:15:32.881 "hosts": [], 00:15:32.881 "serial_number": "SPDK2", 00:15:32.881 "model_number": "SPDK bdev Controller", 00:15:32.881 "max_namespaces": 32, 00:15:32.881 "min_cntlid": 1, 00:15:32.881 "max_cntlid": 65519, 00:15:32.881 "namespaces": [ 00:15:32.881 { 00:15:32.881 "nsid": 1, 00:15:32.881 "bdev_name": "Malloc2", 00:15:32.881 "name": "Malloc2", 00:15:32.881 "nguid": "7ED34959068942B1AFFA52EFDD524182", 00:15:32.881 "uuid": "7ed34959-0689-42b1-affa-52efdd524182" 00:15:32.881 } 00:15:32.881 ] 00:15:32.881 } 00:15:32.881 ] 00:15:32.881 02:56:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:33.138 02:56:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=319042 00:15:33.138 02:56:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:33.138 02:56:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:33.138 02:56:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:15:33.138 02:56:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:33.138 02:56:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:33.138 02:56:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:15:33.138 02:56:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:33.138 02:56:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:33.138 EAL: No free 2048 kB hugepages reported on node 1 00:15:33.138 [2024-05-13 02:56:23.838176] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:33.395 Malloc3 00:15:33.395 02:56:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:33.652 [2024-05-13 02:56:24.240177] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:33.652 02:56:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:33.652 Asynchronous Event Request test 00:15:33.652 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:33.652 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:33.652 Registering asynchronous event callbacks... 00:15:33.652 Starting namespace attribute notice tests for all controllers... 00:15:33.652 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:33.652 aer_cb - Changed Namespace 00:15:33.652 Cleaning up... 00:15:33.912 [ 00:15:33.912 { 00:15:33.912 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:33.912 "subtype": "Discovery", 00:15:33.912 "listen_addresses": [], 00:15:33.912 "allow_any_host": true, 00:15:33.912 "hosts": [] 00:15:33.912 }, 00:15:33.912 { 00:15:33.912 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:33.912 "subtype": "NVMe", 00:15:33.912 "listen_addresses": [ 00:15:33.912 { 00:15:33.912 "trtype": "VFIOUSER", 00:15:33.912 "adrfam": "IPv4", 00:15:33.912 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:33.912 "trsvcid": "0" 00:15:33.912 } 00:15:33.912 ], 00:15:33.912 "allow_any_host": true, 00:15:33.912 "hosts": [], 00:15:33.912 "serial_number": "SPDK1", 00:15:33.912 "model_number": "SPDK bdev Controller", 00:15:33.912 "max_namespaces": 32, 00:15:33.912 "min_cntlid": 1, 00:15:33.912 "max_cntlid": 65519, 00:15:33.912 "namespaces": [ 00:15:33.912 { 00:15:33.912 "nsid": 1, 00:15:33.912 "bdev_name": "Malloc1", 00:15:33.912 "name": "Malloc1", 00:15:33.912 "nguid": "53B09E863CD746DEB197BB089D71F398", 00:15:33.912 "uuid": "53b09e86-3cd7-46de-b197-bb089d71f398" 00:15:33.912 }, 00:15:33.912 { 00:15:33.912 "nsid": 2, 00:15:33.912 "bdev_name": "Malloc3", 00:15:33.912 "name": "Malloc3", 00:15:33.912 "nguid": "95E7A8EC1D3644BBAD36AA1F7BB8E721", 00:15:33.912 "uuid": "95e7a8ec-1d36-44bb-ad36-aa1f7bb8e721" 00:15:33.912 } 00:15:33.912 ] 00:15:33.912 }, 00:15:33.912 { 00:15:33.912 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:33.912 "subtype": "NVMe", 00:15:33.912 "listen_addresses": [ 00:15:33.912 { 00:15:33.912 "trtype": "VFIOUSER", 00:15:33.912 "adrfam": "IPv4", 00:15:33.912 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:33.912 "trsvcid": "0" 00:15:33.912 } 00:15:33.912 ], 00:15:33.912 "allow_any_host": true, 00:15:33.912 "hosts": [], 00:15:33.912 "serial_number": "SPDK2", 00:15:33.912 "model_number": "SPDK bdev Controller", 00:15:33.912 
"max_namespaces": 32, 00:15:33.912 "min_cntlid": 1, 00:15:33.912 "max_cntlid": 65519, 00:15:33.912 "namespaces": [ 00:15:33.912 { 00:15:33.912 "nsid": 1, 00:15:33.912 "bdev_name": "Malloc2", 00:15:33.912 "name": "Malloc2", 00:15:33.912 "nguid": "7ED34959068942B1AFFA52EFDD524182", 00:15:33.912 "uuid": "7ed34959-0689-42b1-affa-52efdd524182" 00:15:33.912 } 00:15:33.912 ] 00:15:33.912 } 00:15:33.912 ] 00:15:33.912 02:56:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 319042 00:15:33.912 02:56:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:33.912 02:56:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:33.912 02:56:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:33.912 02:56:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:33.912 [2024-05-13 02:56:24.528212] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:15:33.912 [2024-05-13 02:56:24.528266] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid319128 ] 00:15:33.912 EAL: No free 2048 kB hugepages reported on node 1 00:15:33.912 [2024-05-13 02:56:24.545189] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:15:33.912 [2024-05-13 02:56:24.562815] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:33.912 [2024-05-13 02:56:24.568141] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:33.912 [2024-05-13 02:56:24.568168] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f3ae7dc9000 00:15:33.912 [2024-05-13 02:56:24.569144] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:33.912 [2024-05-13 02:56:24.570152] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:33.912 [2024-05-13 02:56:24.571161] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:33.912 [2024-05-13 02:56:24.572164] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:33.912 [2024-05-13 02:56:24.573180] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:33.912 [2024-05-13 02:56:24.574184] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:33.912 [2024-05-13 02:56:24.575189] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:33.912 [2024-05-13 02:56:24.576196] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:33.912 [2024-05-13 02:56:24.577206] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:33.912 [2024-05-13 02:56:24.577228] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f3ae6b7a000 00:15:33.912 [2024-05-13 02:56:24.578379] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:33.912 [2024-05-13 02:56:24.593170] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:33.912 [2024-05-13 02:56:24.593204] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:33.912 [2024-05-13 02:56:24.595295] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:33.912 [2024-05-13 02:56:24.595346] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:33.912 [2024-05-13 02:56:24.595429] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:15:33.912 [2024-05-13 02:56:24.595454] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:33.912 [2024-05-13 02:56:24.595463] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 
00:15:33.912 [2024-05-13 02:56:24.596297] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:33.912 [2024-05-13 02:56:24.596317] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:33.912 [2024-05-13 02:56:24.596329] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:33.912 [2024-05-13 02:56:24.597301] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:33.912 [2024-05-13 02:56:24.597321] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:33.912 [2024-05-13 02:56:24.597335] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:33.912 [2024-05-13 02:56:24.598305] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:33.912 [2024-05-13 02:56:24.598325] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:33.912 [2024-05-13 02:56:24.599310] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:33.913 [2024-05-13 02:56:24.599329] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:33.913 [2024-05-13 02:56:24.599338] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:33.913 [2024-05-13 02:56:24.599350] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:33.913 [2024-05-13 02:56:24.599459] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:33.913 [2024-05-13 02:56:24.599467] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:33.913 [2024-05-13 02:56:24.599475] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:33.913 [2024-05-13 02:56:24.600319] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:33.913 [2024-05-13 02:56:24.601327] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:33.913 [2024-05-13 02:56:24.602334] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:33.913 [2024-05-13 02:56:24.603325] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:33.913 [2024-05-13 02:56:24.603399] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for 
CSTS.RDY = 1 (timeout 15000 ms) 00:15:33.913 [2024-05-13 02:56:24.604349] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:33.913 [2024-05-13 02:56:24.604368] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:33.913 [2024-05-13 02:56:24.604377] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:33.913 [2024-05-13 02:56:24.604400] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:33.913 [2024-05-13 02:56:24.604416] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:33.913 [2024-05-13 02:56:24.604435] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:33.913 [2024-05-13 02:56:24.604444] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:33.913 [2024-05-13 02:56:24.604461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:33.913 [2024-05-13 02:56:24.610711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:33.913 [2024-05-13 02:56:24.610732] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:33.913 [2024-05-13 02:56:24.610741] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:33.913 [2024-05-13 02:56:24.610749] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:33.913 [2024-05-13 02:56:24.610757] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:33.913 [2024-05-13 02:56:24.610770] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:33.913 [2024-05-13 02:56:24.610778] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:33.913 [2024-05-13 02:56:24.610787] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:33.913 [2024-05-13 02:56:24.610799] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:33.913 [2024-05-13 02:56:24.610819] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:33.913 [2024-05-13 02:56:24.618709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:33.913 [2024-05-13 02:56:24.618732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:33.913 [2024-05-13 02:56:24.618745] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:33.913 [2024-05-13 02:56:24.618757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:33.913 [2024-05-13 02:56:24.618778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:33.913 [2024-05-13 02:56:24.618787] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:33.913 [2024-05-13 02:56:24.618803] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:33.913 [2024-05-13 02:56:24.618818] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:33.913 [2024-05-13 02:56:24.626708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:33.913 [2024-05-13 02:56:24.626726] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:33.913 [2024-05-13 02:56:24.626736] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:33.913 [2024-05-13 02:56:24.626752] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:33.913 [2024-05-13 02:56:24.626763] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:33.913 [2024-05-13 02:56:24.626778] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:33.913 [2024-05-13 02:56:24.634722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:33.913 [2024-05-13 02:56:24.634784] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:33.913 [2024-05-13 02:56:24.634799] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:33.913 [2024-05-13 02:56:24.634812] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:33.913 [2024-05-13 02:56:24.634821] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:33.913 [2024-05-13 02:56:24.634831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:33.913 [2024-05-13 02:56:24.642706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:33.913 [2024-05-13 02:56:24.642735] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:33.913 [2024-05-13 02:56:24.642752] 
nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:33.913 [2024-05-13 02:56:24.642767] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:33.913 [2024-05-13 02:56:24.642780] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:33.913 [2024-05-13 02:56:24.642788] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:33.913 [2024-05-13 02:56:24.642798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:33.913 [2024-05-13 02:56:24.650706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:33.913 [2024-05-13 02:56:24.650733] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:33.913 [2024-05-13 02:56:24.650749] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:33.913 [2024-05-13 02:56:24.650762] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:33.913 [2024-05-13 02:56:24.650771] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:33.913 [2024-05-13 02:56:24.650781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:33.913 [2024-05-13 02:56:24.658720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:33.913 [2024-05-13 02:56:24.658741] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:33.913 [2024-05-13 02:56:24.658755] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:33.913 [2024-05-13 02:56:24.658770] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:33.913 [2024-05-13 02:56:24.658781] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:33.913 [2024-05-13 02:56:24.658790] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:33.913 [2024-05-13 02:56:24.658798] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:33.913 [2024-05-13 02:56:24.658806] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:33.913 [2024-05-13 02:56:24.658814] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:33.913 [2024-05-13 02:56:24.658845] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:33.913 [2024-05-13 02:56:24.664722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:33.913 [2024-05-13 02:56:24.664749] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:33.913 [2024-05-13 02:56:24.674722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:33.913 [2024-05-13 02:56:24.674746] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:33.913 [2024-05-13 02:56:24.682709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:33.913 [2024-05-13 02:56:24.682748] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:33.913 [2024-05-13 02:56:24.690708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:33.914 [2024-05-13 02:56:24.690733] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:33.914 [2024-05-13 02:56:24.690743] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:33.914 [2024-05-13 02:56:24.690750] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:33.914 [2024-05-13 02:56:24.690756] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:33.914 [2024-05-13 02:56:24.690766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:33.914 [2024-05-13 02:56:24.690778] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:33.914 [2024-05-13 02:56:24.690786] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:33.914 [2024-05-13 02:56:24.690795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:33.914 [2024-05-13 02:56:24.690805] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:33.914 [2024-05-13 02:56:24.690813] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:33.914 [2024-05-13 02:56:24.690821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:33.914 [2024-05-13 02:56:24.690833] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:33.914 [2024-05-13 02:56:24.690841] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:33.914 [2024-05-13 02:56:24.690850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:33.914 [2024-05-13 02:56:24.698707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:33.914 [2024-05-13 02:56:24.698734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:33.914 [2024-05-13 02:56:24.698750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:33.914 [2024-05-13 02:56:24.698765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:33.914 ===================================================== 00:15:33.914 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:33.914 ===================================================== 00:15:33.914 Controller Capabilities/Features 00:15:33.914 ================================ 00:15:33.914 Vendor ID: 4e58 00:15:33.914 Subsystem Vendor ID: 4e58 00:15:33.914 Serial Number: SPDK2 00:15:33.914 Model Number: SPDK bdev Controller 00:15:33.914 Firmware Version: 24.05 00:15:33.914 Recommended Arb Burst: 6 00:15:33.914 IEEE OUI Identifier: 8d 6b 50 00:15:33.914 Multi-path I/O 00:15:33.914 May have multiple subsystem ports: Yes 00:15:33.914 May have multiple controllers: Yes 00:15:33.914 Associated with SR-IOV VF: No 00:15:33.914 Max Data Transfer Size: 131072 00:15:33.914 Max Number of Namespaces: 32 00:15:33.914 Max Number of I/O Queues: 127 00:15:33.914 NVMe Specification Version (VS): 1.3 00:15:33.914 NVMe Specification Version (Identify): 1.3 00:15:33.914 Maximum Queue Entries: 256 00:15:33.914 Contiguous Queues Required: Yes 00:15:33.914 Arbitration Mechanisms Supported 00:15:33.914 Weighted Round Robin: Not Supported 00:15:33.914 Vendor Specific: Not Supported 00:15:33.914 Reset Timeout: 15000 ms 00:15:33.914 Doorbell Stride: 4 bytes 00:15:33.914 NVM Subsystem Reset: Not Supported 00:15:33.914 Command Sets Supported 00:15:33.914 NVM Command Set: Supported 00:15:33.914 Boot Partition: Not Supported 00:15:33.914 Memory Page Size Minimum: 4096 bytes 00:15:33.914 Memory Page Size Maximum: 4096 bytes 00:15:33.914 Persistent Memory Region: Not Supported 00:15:33.914 Optional Asynchronous Events Supported 00:15:33.914 Namespace Attribute Notices: Supported 00:15:33.914 Firmware Activation Notices: Not Supported 00:15:33.914 ANA Change Notices: Not Supported 00:15:33.914 PLE Aggregate Log Change Notices: Not Supported 00:15:33.914 LBA Status Info Alert Notices: Not Supported 00:15:33.914 EGE Aggregate Log Change Notices: Not Supported 00:15:33.914 Normal NVM Subsystem Shutdown event: Not Supported 00:15:33.914 Zone Descriptor Change Notices: Not Supported 00:15:33.914 Discovery Log Change Notices: Not Supported 00:15:33.914 Controller Attributes 00:15:33.914 128-bit Host Identifier: Supported 00:15:33.914 Non-Operational Permissive Mode: Not Supported 00:15:33.914 NVM Sets: Not Supported 00:15:33.914 Read Recovery Levels: Not Supported 00:15:33.914 Endurance Groups: Not Supported 00:15:33.914 Predictable Latency Mode: Not Supported 00:15:33.914 Traffic Based Keep ALive: Not Supported 00:15:33.914 Namespace Granularity: Not Supported 00:15:33.914 SQ Associations: Not Supported 00:15:33.914 UUID List: Not Supported 00:15:33.914 Multi-Domain Subsystem: Not Supported 00:15:33.914 Fixed Capacity Management: Not Supported 00:15:33.914 Variable Capacity Management: Not Supported 00:15:33.914 Delete Endurance Group: Not Supported 00:15:33.914 Delete NVM Set: Not Supported 00:15:33.914 Extended LBA Formats Supported: Not 
Supported 00:15:33.914 Flexible Data Placement Supported: Not Supported 00:15:33.914 00:15:33.914 Controller Memory Buffer Support 00:15:33.914 ================================ 00:15:33.914 Supported: No 00:15:33.914 00:15:33.914 Persistent Memory Region Support 00:15:33.914 ================================ 00:15:33.914 Supported: No 00:15:33.914 00:15:33.914 Admin Command Set Attributes 00:15:33.914 ============================ 00:15:33.914 Security Send/Receive: Not Supported 00:15:33.914 Format NVM: Not Supported 00:15:33.914 Firmware Activate/Download: Not Supported 00:15:33.914 Namespace Management: Not Supported 00:15:33.914 Device Self-Test: Not Supported 00:15:33.914 Directives: Not Supported 00:15:33.914 NVMe-MI: Not Supported 00:15:33.914 Virtualization Management: Not Supported 00:15:33.914 Doorbell Buffer Config: Not Supported 00:15:33.914 Get LBA Status Capability: Not Supported 00:15:33.914 Command & Feature Lockdown Capability: Not Supported 00:15:33.914 Abort Command Limit: 4 00:15:33.914 Async Event Request Limit: 4 00:15:33.914 Number of Firmware Slots: N/A 00:15:33.914 Firmware Slot 1 Read-Only: N/A 00:15:33.914 Firmware Activation Without Reset: N/A 00:15:33.914 Multiple Update Detection Support: N/A 00:15:33.914 Firmware Update Granularity: No Information Provided 00:15:33.914 Per-Namespace SMART Log: No 00:15:33.914 Asymmetric Namespace Access Log Page: Not Supported 00:15:33.914 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:33.914 Command Effects Log Page: Supported 00:15:33.914 Get Log Page Extended Data: Supported 00:15:33.914 Telemetry Log Pages: Not Supported 00:15:33.914 Persistent Event Log Pages: Not Supported 00:15:33.914 Supported Log Pages Log Page: May Support 00:15:33.914 Commands Supported & Effects Log Page: Not Supported 00:15:33.914 Feature Identifiers & Effects Log Page:May Support 00:15:33.914 NVMe-MI Commands & Effects Log Page: May Support 00:15:33.914 Data Area 4 for Telemetry Log: Not Supported 00:15:33.914 Error Log Page Entries Supported: 128 00:15:33.914 Keep Alive: Supported 00:15:33.914 Keep Alive Granularity: 10000 ms 00:15:33.914 00:15:33.914 NVM Command Set Attributes 00:15:33.914 ========================== 00:15:33.914 Submission Queue Entry Size 00:15:33.914 Max: 64 00:15:33.914 Min: 64 00:15:33.914 Completion Queue Entry Size 00:15:33.914 Max: 16 00:15:33.914 Min: 16 00:15:33.914 Number of Namespaces: 32 00:15:33.914 Compare Command: Supported 00:15:33.914 Write Uncorrectable Command: Not Supported 00:15:33.914 Dataset Management Command: Supported 00:15:33.914 Write Zeroes Command: Supported 00:15:33.914 Set Features Save Field: Not Supported 00:15:33.914 Reservations: Not Supported 00:15:33.914 Timestamp: Not Supported 00:15:33.914 Copy: Supported 00:15:33.914 Volatile Write Cache: Present 00:15:33.914 Atomic Write Unit (Normal): 1 00:15:33.914 Atomic Write Unit (PFail): 1 00:15:33.914 Atomic Compare & Write Unit: 1 00:15:33.914 Fused Compare & Write: Supported 00:15:33.914 Scatter-Gather List 00:15:33.914 SGL Command Set: Supported (Dword aligned) 00:15:33.914 SGL Keyed: Not Supported 00:15:33.914 SGL Bit Bucket Descriptor: Not Supported 00:15:33.914 SGL Metadata Pointer: Not Supported 00:15:33.914 Oversized SGL: Not Supported 00:15:33.914 SGL Metadata Address: Not Supported 00:15:33.914 SGL Offset: Not Supported 00:15:33.914 Transport SGL Data Block: Not Supported 00:15:33.914 Replay Protected Memory Block: Not Supported 00:15:33.914 00:15:33.914 Firmware Slot Information 00:15:33.914 ========================= 00:15:33.914 
Active slot: 1 00:15:33.914 Slot 1 Firmware Revision: 24.05 00:15:33.914 00:15:33.914 00:15:33.914 Commands Supported and Effects 00:15:33.914 ============================== 00:15:33.914 Admin Commands 00:15:33.914 -------------- 00:15:33.914 Get Log Page (02h): Supported 00:15:33.914 Identify (06h): Supported 00:15:33.914 Abort (08h): Supported 00:15:33.914 Set Features (09h): Supported 00:15:33.915 Get Features (0Ah): Supported 00:15:33.915 Asynchronous Event Request (0Ch): Supported 00:15:33.915 Keep Alive (18h): Supported 00:15:33.915 I/O Commands 00:15:33.915 ------------ 00:15:33.915 Flush (00h): Supported LBA-Change 00:15:33.915 Write (01h): Supported LBA-Change 00:15:33.915 Read (02h): Supported 00:15:33.915 Compare (05h): Supported 00:15:33.915 Write Zeroes (08h): Supported LBA-Change 00:15:33.915 Dataset Management (09h): Supported LBA-Change 00:15:33.915 Copy (19h): Supported LBA-Change 00:15:33.915 Unknown (79h): Supported LBA-Change 00:15:33.915 Unknown (7Ah): Supported 00:15:33.915 00:15:33.915 Error Log 00:15:33.915 ========= 00:15:33.915 00:15:33.915 Arbitration 00:15:33.915 =========== 00:15:33.915 Arbitration Burst: 1 00:15:33.915 00:15:33.915 Power Management 00:15:33.915 ================ 00:15:33.915 Number of Power States: 1 00:15:33.915 Current Power State: Power State #0 00:15:33.915 Power State #0: 00:15:33.915 Max Power: 0.00 W 00:15:33.915 Non-Operational State: Operational 00:15:33.915 Entry Latency: Not Reported 00:15:33.915 Exit Latency: Not Reported 00:15:33.915 Relative Read Throughput: 0 00:15:33.915 Relative Read Latency: 0 00:15:33.915 Relative Write Throughput: 0 00:15:33.915 Relative Write Latency: 0 00:15:33.915 Idle Power: Not Reported 00:15:33.915 Active Power: Not Reported 00:15:33.915 Non-Operational Permissive Mode: Not Supported 00:15:33.915 00:15:33.915 Health Information 00:15:33.915 ================== 00:15:33.915 Critical Warnings: 00:15:33.915 Available Spare Space: OK 00:15:33.915 Temperature: OK 00:15:33.915 Device Reliability: OK 00:15:33.915 Read Only: No 00:15:33.915 Volatile Memory Backup: OK 00:15:33.915 Current Temperature: 0 Kelvin (-2[2024-05-13 02:56:24.698883] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:33.915 [2024-05-13 02:56:24.706704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:33.915 [2024-05-13 02:56:24.706753] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:33.915 [2024-05-13 02:56:24.706771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:33.915 [2024-05-13 02:56:24.706781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:33.915 [2024-05-13 02:56:24.706797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:33.915 [2024-05-13 02:56:24.706808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:33.915 [2024-05-13 02:56:24.706886] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:33.915 [2024-05-13 02:56:24.706907] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:33.915 [2024-05-13 02:56:24.707884] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:33.915 [2024-05-13 02:56:24.707959] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:33.915 [2024-05-13 02:56:24.707974] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:33.915 [2024-05-13 02:56:24.708893] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:33.915 [2024-05-13 02:56:24.708917] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:33.915 [2024-05-13 02:56:24.708970] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:33.915 [2024-05-13 02:56:24.711708] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:34.172 73 Celsius) 00:15:34.172 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:34.172 Available Spare: 0% 00:15:34.172 Available Spare Threshold: 0% 00:15:34.172 Life Percentage Used: 0% 00:15:34.172 Data Units Read: 0 00:15:34.172 Data Units Written: 0 00:15:34.172 Host Read Commands: 0 00:15:34.172 Host Write Commands: 0 00:15:34.172 Controller Busy Time: 0 minutes 00:15:34.172 Power Cycles: 0 00:15:34.172 Power On Hours: 0 hours 00:15:34.172 Unsafe Shutdowns: 0 00:15:34.172 Unrecoverable Media Errors: 0 00:15:34.172 Lifetime Error Log Entries: 0 00:15:34.172 Warning Temperature Time: 0 minutes 00:15:34.172 Critical Temperature Time: 0 minutes 00:15:34.172 00:15:34.172 Number of Queues 00:15:34.172 ================ 00:15:34.172 Number of I/O Submission Queues: 127 00:15:34.172 Number of I/O Completion Queues: 127 00:15:34.172 00:15:34.172 Active Namespaces 00:15:34.172 ================= 00:15:34.172 Namespace ID:1 00:15:34.173 Error Recovery Timeout: Unlimited 00:15:34.173 Command Set Identifier: NVM (00h) 00:15:34.173 Deallocate: Supported 00:15:34.173 Deallocated/Unwritten Error: Not Supported 00:15:34.173 Deallocated Read Value: Unknown 00:15:34.173 Deallocate in Write Zeroes: Not Supported 00:15:34.173 Deallocated Guard Field: 0xFFFF 00:15:34.173 Flush: Supported 00:15:34.173 Reservation: Supported 00:15:34.173 Namespace Sharing Capabilities: Multiple Controllers 00:15:34.173 Size (in LBAs): 131072 (0GiB) 00:15:34.173 Capacity (in LBAs): 131072 (0GiB) 00:15:34.173 Utilization (in LBAs): 131072 (0GiB) 00:15:34.173 NGUID: 7ED34959068942B1AFFA52EFDD524182 00:15:34.173 UUID: 7ed34959-0689-42b1-affa-52efdd524182 00:15:34.173 Thin Provisioning: Not Supported 00:15:34.173 Per-NS Atomic Units: Yes 00:15:34.173 Atomic Boundary Size (Normal): 0 00:15:34.173 Atomic Boundary Size (PFail): 0 00:15:34.173 Atomic Boundary Offset: 0 00:15:34.173 Maximum Single Source Range Length: 65535 00:15:34.173 Maximum Copy Length: 65535 00:15:34.173 Maximum Source Range Count: 1 00:15:34.173 NGUID/EUI64 Never Reused: No 00:15:34.173 Namespace Write Protected: No 00:15:34.173 Number of LBA Formats: 1 00:15:34.173 Current LBA Format: LBA Format #00 00:15:34.173 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:34.173 00:15:34.173 02:56:24 nvmf_tcp.nvmf_vfio_user 
-- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:34.173 EAL: No free 2048 kB hugepages reported on node 1 00:15:34.173 [2024-05-13 02:56:24.943226] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:39.437 Initializing NVMe Controllers 00:15:39.437 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:39.437 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:39.437 Initialization complete. Launching workers. 00:15:39.437 ======================================================== 00:15:39.437 Latency(us) 00:15:39.437 Device Information : IOPS MiB/s Average min max 00:15:39.437 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35588.52 139.02 3595.99 1144.34 7403.31 00:15:39.437 ======================================================== 00:15:39.437 Total : 35588.52 139.02 3595.99 1144.34 7403.31 00:15:39.437 00:15:39.437 [2024-05-13 02:56:30.053099] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:39.437 02:56:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:39.437 EAL: No free 2048 kB hugepages reported on node 1 00:15:39.694 [2024-05-13 02:56:30.293869] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:44.958 Initializing NVMe Controllers 00:15:44.958 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:44.958 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:44.958 Initialization complete. Launching workers. 
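Two quick consistency checks on these spdk_nvme_perf summaries (the read run above and the write run whose results follow): throughput in MiB/s is just IOPS times the 4096-byte I/O size, 35588.52 x 4096 / 2^20 ≈ 139.02, and with -q 128 the average latency tracks Little's law, 128 / 35588.52 ≈ 3597 us against the reported 3595.99 us (for the write run, 128 / 32991.80 ≈ 3880 us against the reported 3880.73 us).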
00:15:44.958 ======================================================== 00:15:44.958 Latency(us) 00:15:44.958 Device Information : IOPS MiB/s Average min max 00:15:44.958 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 32991.80 128.87 3880.73 1186.89 9235.64 00:15:44.958 ======================================================== 00:15:44.958 Total : 32991.80 128.87 3880.73 1186.89 9235.64 00:15:44.958 00:15:44.958 [2024-05-13 02:56:35.314522] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:44.958 02:56:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:44.958 EAL: No free 2048 kB hugepages reported on node 1 00:15:44.958 [2024-05-13 02:56:35.528337] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:50.224 [2024-05-13 02:56:40.652836] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:50.224 Initializing NVMe Controllers 00:15:50.224 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:50.224 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:50.224 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:50.224 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:50.224 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:50.224 Initialization complete. Launching workers. 00:15:50.224 Starting thread on core 2 00:15:50.224 Starting thread on core 3 00:15:50.225 Starting thread on core 1 00:15:50.225 02:56:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:50.225 EAL: No free 2048 kB hugepages reported on node 1 00:15:50.225 [2024-05-13 02:56:40.950219] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:53.507 [2024-05-13 02:56:44.006586] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:53.507 Initializing NVMe Controllers 00:15:53.507 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:53.507 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:53.507 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:53.507 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:53.507 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:53.507 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:53.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:53.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:53.507 Initialization complete. Launching workers. 
00:15:53.507 Starting thread on core 1 with urgent priority queue 00:15:53.507 Starting thread on core 2 with urgent priority queue 00:15:53.507 Starting thread on core 3 with urgent priority queue 00:15:53.507 Starting thread on core 0 with urgent priority queue 00:15:53.507 SPDK bdev Controller (SPDK2 ) core 0: 6551.00 IO/s 15.26 secs/100000 ios 00:15:53.507 SPDK bdev Controller (SPDK2 ) core 1: 7169.00 IO/s 13.95 secs/100000 ios 00:15:53.507 SPDK bdev Controller (SPDK2 ) core 2: 6668.00 IO/s 15.00 secs/100000 ios 00:15:53.507 SPDK bdev Controller (SPDK2 ) core 3: 6790.67 IO/s 14.73 secs/100000 ios 00:15:53.507 ======================================================== 00:15:53.507 00:15:53.507 02:56:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:53.507 EAL: No free 2048 kB hugepages reported on node 1 00:15:53.507 [2024-05-13 02:56:44.298221] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:53.507 Initializing NVMe Controllers 00:15:53.507 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:53.507 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:53.507 Namespace ID: 1 size: 0GB 00:15:53.507 Initialization complete. 00:15:53.507 INFO: using host memory buffer for IO 00:15:53.507 Hello world! 00:15:53.507 [2024-05-13 02:56:44.307417] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:53.764 02:56:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:53.764 EAL: No free 2048 kB hugepages reported on node 1 00:15:54.020 [2024-05-13 02:56:44.611075] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:54.951 Initializing NVMe Controllers 00:15:54.951 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:54.951 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:54.951 Initialization complete. Launching workers. 
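For orientation: the perf and example binaries exercised in steps @84 through @89 above all reach the same vfio-user controller through one transport-ID string. A minimal sketch of that invocation pattern follows; the relative build paths are an assumption here (the trace itself uses the absolute workspace path), and all flags are copied from the commands traced above.
# transport ID shared by spdk_nvme_perf, reconnect, arbitration, hello_world and overhead above
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
# 4 KiB reads for 5 s at queue depth 128 on core 1 (mask 0x2), as in step @84
./build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
# same workload shape as a write pass, as in step @85
./build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2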
00:15:54.951 submit (in ns) avg, min, max = 8867.5, 3512.2, 4016348.9 00:15:54.951 complete (in ns) avg, min, max = 23830.4, 2075.6, 7989320.0 00:15:54.951 00:15:54.951 Submit histogram 00:15:54.951 ================ 00:15:54.951 Range in us Cumulative Count 00:15:54.951 3.508 - 3.532: 0.0072% ( 1) 00:15:54.951 3.532 - 3.556: 1.0265% ( 141) 00:15:54.951 3.556 - 3.579: 2.8408% ( 251) 00:15:54.951 3.579 - 3.603: 6.6431% ( 526) 00:15:54.951 3.603 - 3.627: 14.3776% ( 1070) 00:15:54.951 3.627 - 3.650: 25.6397% ( 1558) 00:15:54.951 3.650 - 3.674: 34.9574% ( 1289) 00:15:54.951 3.674 - 3.698: 42.3088% ( 1017) 00:15:54.951 3.698 - 3.721: 48.1278% ( 805) 00:15:54.951 3.721 - 3.745: 53.2890% ( 714) 00:15:54.951 3.745 - 3.769: 58.3345% ( 698) 00:15:54.951 3.769 - 3.793: 62.3536% ( 556) 00:15:54.951 3.793 - 3.816: 65.6715% ( 459) 00:15:54.951 3.816 - 3.840: 68.8015% ( 433) 00:15:54.951 3.840 - 3.864: 72.8929% ( 566) 00:15:54.951 3.864 - 3.887: 77.1071% ( 583) 00:15:54.951 3.887 - 3.911: 80.6708% ( 493) 00:15:54.952 3.911 - 3.935: 83.8658% ( 442) 00:15:54.952 3.935 - 3.959: 86.0200% ( 298) 00:15:54.952 3.959 - 3.982: 87.8199% ( 249) 00:15:54.952 3.982 - 4.006: 89.4680% ( 228) 00:15:54.952 4.006 - 4.030: 90.9065% ( 199) 00:15:54.952 4.030 - 4.053: 91.9980% ( 151) 00:15:54.952 4.053 - 4.077: 93.0100% ( 140) 00:15:54.952 4.077 - 4.101: 93.8774% ( 120) 00:15:54.952 4.101 - 4.124: 94.6725% ( 110) 00:15:54.952 4.124 - 4.148: 95.4749% ( 111) 00:15:54.952 4.148 - 4.172: 96.0171% ( 75) 00:15:54.952 4.172 - 4.196: 96.3496% ( 46) 00:15:54.952 4.196 - 4.219: 96.5592% ( 29) 00:15:54.952 4.219 - 4.243: 96.7182% ( 22) 00:15:54.952 4.243 - 4.267: 96.8483% ( 18) 00:15:54.952 4.267 - 4.290: 97.0074% ( 22) 00:15:54.952 4.290 - 4.314: 97.1158% ( 15) 00:15:54.952 4.314 - 4.338: 97.2387% ( 17) 00:15:54.952 4.338 - 4.361: 97.3110% ( 10) 00:15:54.952 4.361 - 4.385: 97.3616% ( 7) 00:15:54.952 4.385 - 4.409: 97.3688% ( 1) 00:15:54.952 4.409 - 4.433: 97.3760% ( 1) 00:15:54.952 4.433 - 4.456: 97.4194% ( 6) 00:15:54.952 4.456 - 4.480: 97.4266% ( 1) 00:15:54.952 4.480 - 4.504: 97.4339% ( 1) 00:15:54.952 4.504 - 4.527: 97.4411% ( 1) 00:15:54.952 4.527 - 4.551: 97.4628% ( 3) 00:15:54.952 4.575 - 4.599: 97.4700% ( 1) 00:15:54.952 4.622 - 4.646: 97.4845% ( 2) 00:15:54.952 4.646 - 4.670: 97.4917% ( 1) 00:15:54.952 4.693 - 4.717: 97.4989% ( 1) 00:15:54.952 4.741 - 4.764: 97.5351% ( 5) 00:15:54.952 4.764 - 4.788: 97.5567% ( 3) 00:15:54.952 4.788 - 4.812: 97.5929% ( 5) 00:15:54.952 4.812 - 4.836: 97.6073% ( 2) 00:15:54.952 4.836 - 4.859: 97.6290% ( 3) 00:15:54.952 4.859 - 4.883: 97.6507% ( 3) 00:15:54.952 4.883 - 4.907: 97.7013% ( 7) 00:15:54.952 4.907 - 4.930: 97.7519% ( 7) 00:15:54.952 4.954 - 4.978: 97.7953% ( 6) 00:15:54.952 4.978 - 5.001: 97.8314% ( 5) 00:15:54.952 5.001 - 5.025: 97.8531% ( 3) 00:15:54.952 5.025 - 5.049: 97.8893% ( 5) 00:15:54.952 5.049 - 5.073: 97.9109% ( 3) 00:15:54.952 5.073 - 5.096: 97.9326% ( 3) 00:15:54.952 5.096 - 5.120: 97.9615% ( 4) 00:15:54.952 5.120 - 5.144: 98.0121% ( 7) 00:15:54.952 5.144 - 5.167: 98.0411% ( 4) 00:15:54.952 5.167 - 5.191: 98.0700% ( 4) 00:15:54.952 5.191 - 5.215: 98.0917% ( 3) 00:15:54.952 5.215 - 5.239: 98.1133% ( 3) 00:15:54.952 5.239 - 5.262: 98.1206% ( 1) 00:15:54.952 5.262 - 5.286: 98.1423% ( 3) 00:15:54.952 5.286 - 5.310: 98.1567% ( 2) 00:15:54.952 5.310 - 5.333: 98.1639% ( 1) 00:15:54.952 5.333 - 5.357: 98.1856% ( 3) 00:15:54.952 5.357 - 5.381: 98.1929% ( 1) 00:15:54.952 5.381 - 5.404: 98.2001% ( 1) 00:15:54.952 5.404 - 5.428: 98.2073% ( 1) 00:15:54.952 5.476 - 5.499: 98.2145% ( 1) 
00:15:54.952 5.594 - 5.618: 98.2218% ( 1) 00:15:54.952 5.618 - 5.641: 98.2362% ( 2) 00:15:54.952 5.760 - 5.784: 98.2579% ( 3) 00:15:54.952 6.258 - 6.305: 98.2724% ( 2) 00:15:54.952 6.305 - 6.353: 98.2796% ( 1) 00:15:54.952 6.447 - 6.495: 98.2868% ( 1) 00:15:54.952 6.495 - 6.542: 98.2941% ( 1) 00:15:54.952 6.542 - 6.590: 98.3013% ( 1) 00:15:54.952 6.684 - 6.732: 98.3085% ( 1) 00:15:54.952 6.874 - 6.921: 98.3157% ( 1) 00:15:54.952 6.969 - 7.016: 98.3230% ( 1) 00:15:54.952 7.064 - 7.111: 98.3447% ( 3) 00:15:54.952 7.206 - 7.253: 98.3519% ( 1) 00:15:54.952 7.253 - 7.301: 98.3591% ( 1) 00:15:54.952 7.301 - 7.348: 98.3736% ( 2) 00:15:54.952 7.396 - 7.443: 98.3953% ( 3) 00:15:54.952 7.443 - 7.490: 98.4097% ( 2) 00:15:54.952 7.490 - 7.538: 98.4242% ( 2) 00:15:54.952 7.538 - 7.585: 98.4314% ( 1) 00:15:54.952 7.585 - 7.633: 98.4603% ( 4) 00:15:54.952 7.633 - 7.680: 98.4675% ( 1) 00:15:54.952 7.680 - 7.727: 98.4965% ( 4) 00:15:54.952 7.727 - 7.775: 98.5037% ( 1) 00:15:54.952 7.822 - 7.870: 98.5109% ( 1) 00:15:54.952 7.964 - 8.012: 98.5181% ( 1) 00:15:54.952 8.012 - 8.059: 98.5326% ( 2) 00:15:54.952 8.059 - 8.107: 98.5398% ( 1) 00:15:54.952 8.107 - 8.154: 98.5471% ( 1) 00:15:54.952 8.154 - 8.201: 98.5543% ( 1) 00:15:54.952 8.201 - 8.249: 98.5615% ( 1) 00:15:54.952 8.249 - 8.296: 98.5687% ( 1) 00:15:54.952 8.296 - 8.344: 98.5760% ( 1) 00:15:54.952 8.391 - 8.439: 98.5832% ( 1) 00:15:54.952 8.439 - 8.486: 98.5904% ( 1) 00:15:54.952 8.486 - 8.533: 98.5977% ( 1) 00:15:54.952 8.533 - 8.581: 98.6121% ( 2) 00:15:54.952 8.581 - 8.628: 98.6193% ( 1) 00:15:54.952 8.723 - 8.770: 98.6410% ( 3) 00:15:54.952 8.770 - 8.818: 98.6483% ( 1) 00:15:54.952 8.865 - 8.913: 98.6555% ( 1) 00:15:54.952 8.913 - 8.960: 98.6699% ( 2) 00:15:54.952 8.960 - 9.007: 98.6772% ( 1) 00:15:54.952 9.102 - 9.150: 98.6844% ( 1) 00:15:54.952 9.197 - 9.244: 98.6916% ( 1) 00:15:54.952 9.292 - 9.339: 98.6989% ( 1) 00:15:54.952 9.339 - 9.387: 98.7061% ( 1) 00:15:54.952 9.766 - 9.813: 98.7133% ( 1) 00:15:54.952 9.908 - 9.956: 98.7205% ( 1) 00:15:54.952 10.003 - 10.050: 98.7278% ( 1) 00:15:54.952 10.050 - 10.098: 98.7350% ( 1) 00:15:54.952 10.287 - 10.335: 98.7422% ( 1) 00:15:54.952 10.335 - 10.382: 98.7495% ( 1) 00:15:54.952 10.477 - 10.524: 98.7567% ( 1) 00:15:54.952 10.572 - 10.619: 98.7639% ( 1) 00:15:54.952 11.425 - 11.473: 98.7784% ( 2) 00:15:54.952 11.804 - 11.852: 98.7928% ( 2) 00:15:54.952 11.852 - 11.899: 98.8001% ( 1) 00:15:54.952 11.899 - 11.947: 98.8073% ( 1) 00:15:54.952 11.947 - 11.994: 98.8217% ( 2) 00:15:54.952 12.231 - 12.326: 98.8290% ( 1) 00:15:54.952 12.326 - 12.421: 98.8362% ( 1) 00:15:54.953 12.610 - 12.705: 98.8434% ( 1) 00:15:54.953 13.179 - 13.274: 98.8507% ( 1) 00:15:54.953 13.274 - 13.369: 98.8579% ( 1) 00:15:54.953 13.369 - 13.464: 98.8723% ( 2) 00:15:54.953 13.653 - 13.748: 98.8940% ( 3) 00:15:54.953 13.938 - 14.033: 98.9013% ( 1) 00:15:54.953 14.033 - 14.127: 98.9157% ( 2) 00:15:54.953 14.127 - 14.222: 98.9229% ( 1) 00:15:54.953 14.222 - 14.317: 98.9519% ( 4) 00:15:54.953 14.601 - 14.696: 98.9663% ( 2) 00:15:54.953 14.791 - 14.886: 98.9735% ( 1) 00:15:54.953 16.877 - 16.972: 98.9808% ( 1) 00:15:54.953 17.067 - 17.161: 98.9952% ( 2) 00:15:54.953 17.256 - 17.351: 99.0097% ( 2) 00:15:54.953 17.351 - 17.446: 99.0169% ( 1) 00:15:54.953 17.446 - 17.541: 99.0458% ( 4) 00:15:54.953 17.541 - 17.636: 99.0747% ( 4) 00:15:54.953 17.636 - 17.730: 99.1253% ( 7) 00:15:54.953 17.730 - 17.825: 99.1398% ( 2) 00:15:54.953 17.825 - 17.920: 99.1687% ( 4) 00:15:54.953 17.920 - 18.015: 99.2193% ( 7) 00:15:54.953 18.015 - 18.110: 99.2771% ( 
8) 00:15:54.953 18.110 - 18.204: 99.3133% ( 5) 00:15:54.953 18.204 - 18.299: 99.3856% ( 10) 00:15:54.953 18.299 - 18.394: 99.4579% ( 10) 00:15:54.953 18.394 - 18.489: 99.5880% ( 18) 00:15:54.953 18.489 - 18.584: 99.6241% ( 5) 00:15:54.953 18.584 - 18.679: 99.6747% ( 7) 00:15:54.953 18.679 - 18.773: 99.6819% ( 1) 00:15:54.953 18.773 - 18.868: 99.6964% ( 2) 00:15:54.953 18.868 - 18.963: 99.7542% ( 8) 00:15:54.953 18.963 - 19.058: 99.7687% ( 2) 00:15:54.953 19.058 - 19.153: 99.7904% ( 3) 00:15:54.953 19.153 - 19.247: 99.7976% ( 1) 00:15:54.953 19.342 - 19.437: 99.8121% ( 2) 00:15:54.953 19.911 - 20.006: 99.8193% ( 1) 00:15:54.953 20.006 - 20.101: 99.8337% ( 2) 00:15:54.953 20.480 - 20.575: 99.8410% ( 1) 00:15:54.953 21.618 - 21.713: 99.8482% ( 1) 00:15:54.953 22.471 - 22.566: 99.8554% ( 1) 00:15:54.953 22.566 - 22.661: 99.8627% ( 1) 00:15:54.953 22.756 - 22.850: 99.8699% ( 1) 00:15:54.953 26.169 - 26.359: 99.8771% ( 1) 00:15:54.953 3980.705 - 4004.978: 99.9783% ( 14) 00:15:54.953 4004.978 - 4029.250: 100.0000% ( 3) 00:15:54.953 00:15:54.953 Complete histogram 00:15:54.953 ================== 00:15:54.953 Range in us Cumulative Count 00:15:54.953 2.074 - 2.086: 1.0626% ( 147) 00:15:54.953 2.086 - 2.098: 13.6258% ( 1738) 00:15:54.953 2.098 - 2.110: 30.6130% ( 2350) 00:15:54.953 2.110 - 2.121: 38.0512% ( 1029) 00:15:54.953 2.121 - 2.133: 51.8361% ( 1907) 00:15:54.953 2.133 - 2.145: 58.0454% ( 859) 00:15:54.953 2.145 - 2.157: 61.2766% ( 447) 00:15:54.953 2.157 - 2.169: 67.2690% ( 829) 00:15:54.953 2.169 - 2.181: 70.7098% ( 476) 00:15:54.953 2.181 - 2.193: 73.8470% ( 434) 00:15:54.953 2.193 - 2.204: 79.9263% ( 841) 00:15:54.953 2.204 - 2.216: 83.1068% ( 440) 00:15:54.953 2.216 - 2.228: 84.5236% ( 196) 00:15:54.953 2.228 - 2.240: 87.1476% ( 363) 00:15:54.953 2.240 - 2.252: 89.1210% ( 273) 00:15:54.953 2.252 - 2.264: 90.2559% ( 157) 00:15:54.953 2.264 - 2.276: 92.5907% ( 323) 00:15:54.953 2.276 - 2.287: 94.3617% ( 245) 00:15:54.953 2.287 - 2.299: 95.0267% ( 92) 00:15:54.953 2.299 - 2.311: 95.4677% ( 61) 00:15:54.953 2.311 - 2.323: 95.6339% ( 23) 00:15:54.953 2.323 - 2.335: 95.7641% ( 18) 00:15:54.953 2.335 - 2.347: 95.9520% ( 26) 00:15:54.953 2.347 - 2.359: 96.2411% ( 40) 00:15:54.953 2.359 - 2.370: 96.5881% ( 48) 00:15:54.953 2.370 - 2.382: 96.8989% ( 43) 00:15:54.953 2.382 - 2.394: 97.2242% ( 45) 00:15:54.953 2.394 - 2.406: 97.4772% ( 35) 00:15:54.953 2.406 - 2.418: 97.6724% ( 27) 00:15:54.953 2.418 - 2.430: 97.8459% ( 24) 00:15:54.953 2.430 - 2.441: 97.9760% ( 18) 00:15:54.953 2.441 - 2.453: 98.0555% ( 11) 00:15:54.953 2.453 - 2.465: 98.1495% ( 13) 00:15:54.953 2.465 - 2.477: 98.2435% ( 13) 00:15:54.953 2.477 - 2.489: 98.3013% ( 8) 00:15:54.953 2.489 - 2.501: 98.3663% ( 9) 00:15:54.953 2.501 - 2.513: 98.4242% ( 8) 00:15:54.953 2.524 - 2.536: 98.4459% ( 3) 00:15:54.953 2.548 - 2.560: 98.4675% ( 3) 00:15:54.953 2.560 - 2.572: 98.4748% ( 1) 00:15:54.953 2.584 - 2.596: 98.4820% ( 1) 00:15:54.953 2.607 - 2.619: 98.5037% ( 3) 00:15:54.953 2.619 - 2.631: 98.5109% ( 1) 00:15:54.953 2.667 - 2.679: 98.5181% ( 1) 00:15:54.953 2.679 - 2.690: 98.5254% ( 1) 00:15:54.953 2.690 - 2.702: 98.5326% ( 1) 00:15:54.953 2.702 - 2.714: 98.5398% ( 1) 00:15:54.953 2.761 - 2.773: 98.5471% ( 1) 00:15:54.953 3.390 - 3.413: 98.5615% ( 2) 00:15:54.953 3.437 - 3.461: 98.5687% ( 1) 00:15:54.953 3.461 - 3.484: 98.6049% ( 5) 00:15:54.953 3.484 - 3.508: 98.6193% ( 2) 00:15:54.953 3.556 - 3.579: 98.6266% ( 1) 00:15:54.953 3.603 - 3.627: 98.6338% ( 1) 00:15:54.953 3.627 - 3.650: 98.6410% ( 1) 00:15:54.953 3.650 - 3.674: 98.6555% ( 2) 
00:15:54.953 3.674 - 3.698: 98.6627% ( 1) 00:15:54.953 3.721 - 3.745: 98.6844% ( 3) 00:15:54.953 3.745 - 3.769: 98.6916% ( 1) 00:15:54.953 3.911 - 3.935: 98.7061% ( 2) 00:15:54.953 3.935 - 3.959: 98.7205% ( 2) 00:15:54.953 3.982 - 4.006: 98.7278% ( 1) 00:15:54.953 4.053 - 4.077: 98.7350% ( 1) 00:15:54.953 5.120 - 5.144: 98.7422% ( 1) 00:15:54.953 5.381 - 5.404: 98.7495% ( 1) 00:15:54.953 5.499 - 5.523: 98.7567% ( 1) 00:15:54.953 5.594 - 5.618: 98.7711% ( 2) 00:15:54.953 5.641 - 5.665: 98.7784% ( 1) 00:15:54.953 5.831 - 5.855: 9[2024-05-13 02:56:45.713508] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:55.211 8.7856% ( 1) 00:15:55.211 5.855 - 5.879: 98.7928% ( 1) 00:15:55.211 5.926 - 5.950: 98.8001% ( 1) 00:15:55.211 6.068 - 6.116: 98.8145% ( 2) 00:15:55.211 6.495 - 6.542: 98.8217% ( 1) 00:15:55.211 6.542 - 6.590: 98.8290% ( 1) 00:15:55.211 6.590 - 6.637: 98.8362% ( 1) 00:15:55.211 6.637 - 6.684: 98.8434% ( 1) 00:15:55.211 6.684 - 6.732: 98.8507% ( 1) 00:15:55.211 6.921 - 6.969: 98.8579% ( 1) 00:15:55.211 7.585 - 7.633: 98.8651% ( 1) 00:15:55.211 7.964 - 8.012: 98.8723% ( 1) 00:15:55.211 9.007 - 9.055: 98.8796% ( 1) 00:15:55.211 15.360 - 15.455: 98.8868% ( 1) 00:15:55.211 15.455 - 15.550: 98.9013% ( 2) 00:15:55.211 15.739 - 15.834: 98.9085% ( 1) 00:15:55.211 15.834 - 15.929: 98.9374% ( 4) 00:15:55.212 15.929 - 16.024: 98.9880% ( 7) 00:15:55.212 16.024 - 16.119: 99.0386% ( 7) 00:15:55.212 16.119 - 16.213: 99.0747% ( 5) 00:15:55.212 16.213 - 16.308: 99.1253% ( 7) 00:15:55.212 16.308 - 16.403: 99.1543% ( 4) 00:15:55.212 16.403 - 16.498: 99.1976% ( 6) 00:15:55.212 16.498 - 16.593: 99.2627% ( 9) 00:15:55.212 16.593 - 16.687: 99.2844% ( 3) 00:15:55.212 16.687 - 16.782: 99.2988% ( 2) 00:15:55.212 16.782 - 16.877: 99.3277% ( 4) 00:15:55.212 16.877 - 16.972: 99.3567% ( 4) 00:15:55.212 16.972 - 17.067: 99.3856% ( 4) 00:15:55.212 17.067 - 17.161: 99.4000% ( 2) 00:15:55.212 17.161 - 17.256: 99.4073% ( 1) 00:15:55.212 17.636 - 17.730: 99.4217% ( 2) 00:15:55.212 17.730 - 17.825: 99.4362% ( 2) 00:15:55.212 17.825 - 17.920: 99.4434% ( 1) 00:15:55.212 17.920 - 18.015: 99.4506% ( 1) 00:15:55.212 18.299 - 18.394: 99.4579% ( 1) 00:15:55.212 18.868 - 18.963: 99.4651% ( 1) 00:15:55.212 3179.710 - 3203.982: 99.4723% ( 1) 00:15:55.212 3543.799 - 3568.071: 99.4795% ( 1) 00:15:55.212 3616.616 - 3640.889: 99.4868% ( 1) 00:15:55.212 3980.705 - 4004.978: 99.8121% ( 45) 00:15:55.212 4004.978 - 4029.250: 99.9855% ( 24) 00:15:55.212 4029.250 - 4053.523: 99.9928% ( 1) 00:15:55.212 7961.410 - 8009.956: 100.0000% ( 1) 00:15:55.212 00:15:55.212 02:56:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:55.212 02:56:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:55.212 02:56:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:55.212 02:56:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:55.212 02:56:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:55.470 [ 00:15:55.470 { 00:15:55.470 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:55.470 "subtype": "Discovery", 00:15:55.470 "listen_addresses": [], 00:15:55.470 "allow_any_host": true, 00:15:55.470 "hosts": [] 00:15:55.470 }, 00:15:55.470 { 
00:15:55.470 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:55.470 "subtype": "NVMe", 00:15:55.470 "listen_addresses": [ 00:15:55.470 { 00:15:55.470 "trtype": "VFIOUSER", 00:15:55.470 "adrfam": "IPv4", 00:15:55.470 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:55.470 "trsvcid": "0" 00:15:55.470 } 00:15:55.470 ], 00:15:55.470 "allow_any_host": true, 00:15:55.470 "hosts": [], 00:15:55.470 "serial_number": "SPDK1", 00:15:55.470 "model_number": "SPDK bdev Controller", 00:15:55.470 "max_namespaces": 32, 00:15:55.470 "min_cntlid": 1, 00:15:55.470 "max_cntlid": 65519, 00:15:55.470 "namespaces": [ 00:15:55.470 { 00:15:55.470 "nsid": 1, 00:15:55.470 "bdev_name": "Malloc1", 00:15:55.470 "name": "Malloc1", 00:15:55.470 "nguid": "53B09E863CD746DEB197BB089D71F398", 00:15:55.470 "uuid": "53b09e86-3cd7-46de-b197-bb089d71f398" 00:15:55.470 }, 00:15:55.470 { 00:15:55.470 "nsid": 2, 00:15:55.470 "bdev_name": "Malloc3", 00:15:55.470 "name": "Malloc3", 00:15:55.470 "nguid": "95E7A8EC1D3644BBAD36AA1F7BB8E721", 00:15:55.470 "uuid": "95e7a8ec-1d36-44bb-ad36-aa1f7bb8e721" 00:15:55.470 } 00:15:55.470 ] 00:15:55.470 }, 00:15:55.470 { 00:15:55.470 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:55.470 "subtype": "NVMe", 00:15:55.470 "listen_addresses": [ 00:15:55.470 { 00:15:55.470 "trtype": "VFIOUSER", 00:15:55.470 "adrfam": "IPv4", 00:15:55.470 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:55.470 "trsvcid": "0" 00:15:55.470 } 00:15:55.470 ], 00:15:55.470 "allow_any_host": true, 00:15:55.470 "hosts": [], 00:15:55.470 "serial_number": "SPDK2", 00:15:55.470 "model_number": "SPDK bdev Controller", 00:15:55.470 "max_namespaces": 32, 00:15:55.470 "min_cntlid": 1, 00:15:55.470 "max_cntlid": 65519, 00:15:55.470 "namespaces": [ 00:15:55.470 { 00:15:55.470 "nsid": 1, 00:15:55.470 "bdev_name": "Malloc2", 00:15:55.470 "name": "Malloc2", 00:15:55.470 "nguid": "7ED34959068942B1AFFA52EFDD524182", 00:15:55.470 "uuid": "7ed34959-0689-42b1-affa-52efdd524182" 00:15:55.470 } 00:15:55.470 ] 00:15:55.470 } 00:15:55.470 ] 00:15:55.470 02:56:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:55.470 02:56:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=321687 00:15:55.470 02:56:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:55.470 02:56:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:55.470 02:56:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:15:55.470 02:56:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:55.470 02:56:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:55.470 02:56:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:15:55.470 02:56:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:55.470 02:56:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:55.470 EAL: No free 2048 kB hugepages reported on node 1 00:15:55.470 [2024-05-13 02:56:46.202587] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:55.729 Malloc4 00:15:55.729 02:56:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:55.987 [2024-05-13 02:56:46.567296] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:55.987 02:56:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:55.987 Asynchronous Event Request test 00:15:55.987 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:55.987 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:55.987 Registering asynchronous event callbacks... 00:15:55.987 Starting namespace attribute notice tests for all controllers... 00:15:55.987 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:55.987 aer_cb - Changed Namespace 00:15:55.987 Cleaning up... 00:15:56.246 [ 00:15:56.246 { 00:15:56.246 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:56.246 "subtype": "Discovery", 00:15:56.246 "listen_addresses": [], 00:15:56.246 "allow_any_host": true, 00:15:56.246 "hosts": [] 00:15:56.246 }, 00:15:56.246 { 00:15:56.246 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:56.246 "subtype": "NVMe", 00:15:56.246 "listen_addresses": [ 00:15:56.246 { 00:15:56.246 "trtype": "VFIOUSER", 00:15:56.246 "adrfam": "IPv4", 00:15:56.246 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:56.246 "trsvcid": "0" 00:15:56.246 } 00:15:56.246 ], 00:15:56.246 "allow_any_host": true, 00:15:56.246 "hosts": [], 00:15:56.246 "serial_number": "SPDK1", 00:15:56.246 "model_number": "SPDK bdev Controller", 00:15:56.246 "max_namespaces": 32, 00:15:56.246 "min_cntlid": 1, 00:15:56.246 "max_cntlid": 65519, 00:15:56.246 "namespaces": [ 00:15:56.246 { 00:15:56.246 "nsid": 1, 00:15:56.246 "bdev_name": "Malloc1", 00:15:56.246 "name": "Malloc1", 00:15:56.246 "nguid": "53B09E863CD746DEB197BB089D71F398", 00:15:56.246 "uuid": "53b09e86-3cd7-46de-b197-bb089d71f398" 00:15:56.246 }, 00:15:56.246 { 00:15:56.246 "nsid": 2, 00:15:56.246 "bdev_name": "Malloc3", 00:15:56.246 "name": "Malloc3", 00:15:56.246 "nguid": "95E7A8EC1D3644BBAD36AA1F7BB8E721", 00:15:56.246 "uuid": "95e7a8ec-1d36-44bb-ad36-aa1f7bb8e721" 00:15:56.246 } 00:15:56.246 ] 00:15:56.246 }, 00:15:56.246 { 00:15:56.246 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:56.246 "subtype": "NVMe", 00:15:56.246 "listen_addresses": [ 00:15:56.246 { 00:15:56.246 "trtype": "VFIOUSER", 00:15:56.246 "adrfam": "IPv4", 00:15:56.246 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:56.246 "trsvcid": "0" 00:15:56.246 } 00:15:56.246 ], 00:15:56.246 "allow_any_host": true, 00:15:56.246 "hosts": [], 00:15:56.246 "serial_number": "SPDK2", 00:15:56.246 "model_number": "SPDK bdev Controller", 00:15:56.246 
"max_namespaces": 32, 00:15:56.246 "min_cntlid": 1, 00:15:56.246 "max_cntlid": 65519, 00:15:56.246 "namespaces": [ 00:15:56.246 { 00:15:56.246 "nsid": 1, 00:15:56.246 "bdev_name": "Malloc2", 00:15:56.246 "name": "Malloc2", 00:15:56.246 "nguid": "7ED34959068942B1AFFA52EFDD524182", 00:15:56.247 "uuid": "7ed34959-0689-42b1-affa-52efdd524182" 00:15:56.247 }, 00:15:56.247 { 00:15:56.247 "nsid": 2, 00:15:56.247 "bdev_name": "Malloc4", 00:15:56.247 "name": "Malloc4", 00:15:56.247 "nguid": "C53150A13A0447C1B6926ED450B0A6FF", 00:15:56.247 "uuid": "c53150a1-3a04-47c1-b692-6ed450b0a6ff" 00:15:56.247 } 00:15:56.247 ] 00:15:56.247 } 00:15:56.247 ] 00:15:56.247 02:56:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 321687 00:15:56.247 02:56:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:56.247 02:56:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 316106 00:15:56.247 02:56:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 316106 ']' 00:15:56.247 02:56:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 316106 00:15:56.247 02:56:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:15:56.247 02:56:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:56.247 02:56:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 316106 00:15:56.247 02:56:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:56.247 02:56:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:56.247 02:56:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 316106' 00:15:56.247 killing process with pid 316106 00:15:56.247 02:56:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 316106 00:15:56.247 [2024-05-13 02:56:46.855308] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:56.247 02:56:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 316106 00:15:56.505 02:56:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:56.505 02:56:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:56.505 02:56:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:56.505 02:56:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:56.505 02:56:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:56.505 02:56:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=321830 00:15:56.505 02:56:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 321830' 00:15:56.505 Process pid: 321830 00:15:56.505 02:56:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:56.505 02:56:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 321830 00:15:56.505 02:56:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:56.505 02:56:47 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 321830 ']' 00:15:56.506 02:56:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.506 02:56:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:56.506 02:56:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.506 02:56:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:56.506 02:56:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:56.506 [2024-05-13 02:56:47.210716] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:56.506 [2024-05-13 02:56:47.211784] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:15:56.506 [2024-05-13 02:56:47.211861] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:56.506 EAL: No free 2048 kB hugepages reported on node 1 00:15:56.506 [2024-05-13 02:56:47.245242] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:56.506 [2024-05-13 02:56:47.272688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:56.766 [2024-05-13 02:56:47.364993] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:56.766 [2024-05-13 02:56:47.365056] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:56.766 [2024-05-13 02:56:47.365072] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:56.766 [2024-05-13 02:56:47.365086] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:56.766 [2024-05-13 02:56:47.365113] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:56.766 [2024-05-13 02:56:47.365169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:56.766 [2024-05-13 02:56:47.365236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:56.766 [2024-05-13 02:56:47.365329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:56.766 [2024-05-13 02:56:47.365331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.766 [2024-05-13 02:56:47.465274] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:56.766 [2024-05-13 02:56:47.465502] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:56.766 [2024-05-13 02:56:47.465814] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:15:56.766 [2024-05-13 02:56:47.466532] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:15:56.766 [2024-05-13 02:56:47.466635] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:56.766 02:56:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:56.766 02:56:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:15:56.766 02:56:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:57.704 02:56:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:57.962 02:56:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:57.962 02:56:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:57.962 02:56:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:57.962 02:56:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:57.962 02:56:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:58.221 Malloc1 00:15:58.221 02:56:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:58.480 02:56:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:58.739 02:56:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:58.998 [2024-05-13 02:56:49.741935] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:58.998 02:56:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:58.998 02:56:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:58.998 02:56:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:59.289 Malloc2 00:15:59.289 02:56:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:59.547 02:56:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:59.805 02:56:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:00.065 02:56:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:00.065 02:56:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 321830 00:16:00.065 02:56:50 nvmf_tcp.nvmf_vfio_user 
-- common/autotest_common.sh@946 -- # '[' -z 321830 ']' 00:16:00.065 02:56:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 321830 00:16:00.065 02:56:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:16:00.065 02:56:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:00.065 02:56:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 321830 00:16:00.065 02:56:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:00.065 02:56:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:00.065 02:56:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 321830' 00:16:00.065 killing process with pid 321830 00:16:00.065 02:56:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 321830 00:16:00.065 [2024-05-13 02:56:50.813221] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:00.065 02:56:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 321830 00:16:00.324 02:56:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:00.324 02:56:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:00.324 00:16:00.324 real 0m52.476s 00:16:00.324 user 3m27.330s 00:16:00.324 sys 0m4.265s 00:16:00.324 02:56:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:00.324 02:56:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:00.324 ************************************ 00:16:00.324 END TEST nvmf_vfio_user 00:16:00.324 ************************************ 00:16:00.583 02:56:51 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:00.583 02:56:51 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:00.583 02:56:51 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:00.583 02:56:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:00.583 ************************************ 00:16:00.583 START TEST nvmf_vfio_user_nvme_compliance 00:16:00.583 ************************************ 00:16:00.583 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:00.583 * Looking for test storage... 
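For reference, the target-side sequence traced in steps @54 and @64 through @74 of the interrupt-mode nvmf_vfio_user run above reduces to the outline below. This is a sketch only: it assumes the commands are issued from the SPDK source tree with rpc.py talking to the default /var/tmp/spdk.sock shown in the trace.
# start the target on cores 0-3 with interrupt mode enabled, as in step @54
build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
# create the VFIOUSER transport in interrupt mode (-M -I), as in step @64
scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
# one malloc-backed subsystem per vfio-user device, as in steps @66-@74
for i in 1 2; do
  mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
  scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
    -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
done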
00:16:00.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:00.583 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:00.583 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:00.583 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:00.583 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:00.583 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:00.583 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:00.583 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:00.583 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:00.583 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:00.583 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:00.583 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:00.583 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:00.583 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:00.583 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:00.583 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:00.583 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:00.583 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:00.583 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:00.583 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:00.583 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:00.583 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:00.583 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:00.583 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.583 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.583 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.583 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:00.584 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.584 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:16:00.584 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:00.584 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:00.584 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:00.584 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:00.584 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:00.584 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:00.584 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:00.584 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:00.584 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:00.584 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:00.584 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:00.584 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:00.584 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:00.584 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=322320 00:16:00.584 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 322320' 00:16:00.584 Process pid: 322320 00:16:00.584 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:00.584 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:00.584 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 322320 00:16:00.584 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@827 -- # '[' -z 322320 ']' 00:16:00.584 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:00.584 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:00.584 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:00.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:00.584 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:00.584 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:00.584 [2024-05-13 02:56:51.271222] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:16:00.584 [2024-05-13 02:56:51.271326] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:00.584 EAL: No free 2048 kB hugepages reported on node 1 00:16:00.584 [2024-05-13 02:56:51.302877] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:00.584 [2024-05-13 02:56:51.330561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:00.842 [2024-05-13 02:56:51.417263] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:00.843 [2024-05-13 02:56:51.417312] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:00.843 [2024-05-13 02:56:51.417334] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:00.843 [2024-05-13 02:56:51.417345] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:00.843 [2024-05-13 02:56:51.417355] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
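For orientation, the compliance suite above first starts its own target on a three-core mask before wiring up a vfio-user endpoint. A sketch of that launch, with the relative build path and the $!-capture of the pid being assumptions rather than something the trace shows directly:
# compliance.sh@19-@20: dedicated target for the compliance run on cores 0-2 (mask 0x7)
build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
nvmfpid=$!
# the suite then waits for /var/tmp/spdk.sock before issuing its rpc_cmd setup calls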
00:16:00.843 [2024-05-13 02:56:51.417455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:00.843 [2024-05-13 02:56:51.420716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:00.843 [2024-05-13 02:56:51.420728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.843 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:00.843 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # return 0 00:16:00.843 02:56:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:16:01.783 02:56:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:01.783 02:56:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:01.783 02:56:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:01.783 02:56:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.783 02:56:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:01.783 02:56:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.783 02:56:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:01.783 02:56:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:01.783 02:56:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.783 02:56:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:02.044 malloc0 00:16:02.044 02:56:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.044 02:56:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:02.044 02:56:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.044 02:56:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:02.044 02:56:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.044 02:56:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:02.044 02:56:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.044 02:56:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:02.044 02:56:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.044 02:56:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:02.044 02:56:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.044 02:56:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:02.044 [2024-05-13 02:56:52.607493] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated 
feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:02.044 02:56:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.044 02:56:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:02.044 EAL: No free 2048 kB hugepages reported on node 1 00:16:02.044 00:16:02.044 00:16:02.044 CUnit - A unit testing framework for C - Version 2.1-3 00:16:02.044 http://cunit.sourceforge.net/ 00:16:02.044 00:16:02.044 00:16:02.044 Suite: nvme_compliance 00:16:02.044 Test: admin_identify_ctrlr_verify_dptr ...[2024-05-13 02:56:52.772521] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:02.044 [2024-05-13 02:56:52.774041] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:02.044 [2024-05-13 02:56:52.774076] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:02.044 [2024-05-13 02:56:52.774089] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:02.044 [2024-05-13 02:56:52.775542] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:02.044 passed 00:16:02.305 Test: admin_identify_ctrlr_verify_fused ...[2024-05-13 02:56:52.867232] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:02.305 [2024-05-13 02:56:52.870254] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:02.305 passed 00:16:02.305 Test: admin_identify_ns ...[2024-05-13 02:56:52.959921] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:02.305 [2024-05-13 02:56:53.021718] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:02.305 [2024-05-13 02:56:53.029716] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:02.305 [2024-05-13 02:56:53.050858] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:02.305 passed 00:16:02.565 Test: admin_get_features_mandatory_features ...[2024-05-13 02:56:53.135133] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:02.565 [2024-05-13 02:56:53.139160] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:02.565 passed 00:16:02.565 Test: admin_get_features_optional_features ...[2024-05-13 02:56:53.223720] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:02.565 [2024-05-13 02:56:53.226746] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:02.566 passed 00:16:02.566 Test: admin_set_features_number_of_queues ...[2024-05-13 02:56:53.314726] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:02.826 [2024-05-13 02:56:53.421956] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:02.826 passed 00:16:02.826 Test: admin_get_log_page_mandatory_logs ...[2024-05-13 02:56:53.507686] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:02.826 [2024-05-13 02:56:53.510700] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:02.826 passed 
00:16:02.826 Test: admin_get_log_page_with_lpo ...[2024-05-13 02:56:53.595447] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:03.084 [2024-05-13 02:56:53.662712] ctrlr.c:2654:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:03.084 [2024-05-13 02:56:53.675783] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:03.084 passed 00:16:03.084 Test: fabric_property_get ...[2024-05-13 02:56:53.759464] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:03.084 [2024-05-13 02:56:53.760747] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:03.084 [2024-05-13 02:56:53.762489] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:03.084 passed 00:16:03.084 Test: admin_delete_io_sq_use_admin_qid ...[2024-05-13 02:56:53.849081] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:03.084 [2024-05-13 02:56:53.850333] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:03.084 [2024-05-13 02:56:53.852101] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:03.084 passed 00:16:03.341 Test: admin_delete_io_sq_delete_sq_twice ...[2024-05-13 02:56:53.933211] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:03.341 [2024-05-13 02:56:54.020706] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:03.341 [2024-05-13 02:56:54.036708] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:03.341 [2024-05-13 02:56:54.041806] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:03.341 passed 00:16:03.341 Test: admin_delete_io_cq_use_admin_qid ...[2024-05-13 02:56:54.123490] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:03.342 [2024-05-13 02:56:54.124782] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:03.342 [2024-05-13 02:56:54.126509] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:03.600 passed 00:16:03.600 Test: admin_delete_io_cq_delete_cq_first ...[2024-05-13 02:56:54.212501] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:03.600 [2024-05-13 02:56:54.287707] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:03.600 [2024-05-13 02:56:54.311722] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:03.600 [2024-05-13 02:56:54.316807] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:03.600 passed 00:16:03.858 Test: admin_create_io_cq_verify_iv_pc ...[2024-05-13 02:56:54.404137] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:03.858 [2024-05-13 02:56:54.405416] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:03.858 [2024-05-13 02:56:54.405456] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:03.858 [2024-05-13 02:56:54.407160] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:03.858 passed 00:16:03.858 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-05-13 
02:56:54.490295] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:03.858 [2024-05-13 02:56:54.579709] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:16:03.858 [2024-05-13 02:56:54.587733] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:03.858 [2024-05-13 02:56:54.595722] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:03.858 [2024-05-13 02:56:54.603718] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:03.858 [2024-05-13 02:56:54.632822] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:04.118 passed 00:16:04.118 Test: admin_create_io_sq_verify_pc ...[2024-05-13 02:56:54.719272] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:04.118 [2024-05-13 02:56:54.735719] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:04.118 [2024-05-13 02:56:54.753582] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:04.118 passed 00:16:04.118 Test: admin_create_io_qp_max_qps ...[2024-05-13 02:56:54.838218] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:05.497 [2024-05-13 02:56:55.951712] nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:16:05.757 [2024-05-13 02:56:56.331027] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:05.757 passed 00:16:05.757 Test: admin_create_io_sq_shared_cq ...[2024-05-13 02:56:56.412796] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:05.757 [2024-05-13 02:56:56.547718] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:06.018 [2024-05-13 02:56:56.584808] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:06.018 passed 00:16:06.018 00:16:06.018 Run Summary: Type Total Ran Passed Failed Inactive 00:16:06.018 suites 1 1 n/a 0 0 00:16:06.018 tests 18 18 18 0 0 00:16:06.018 asserts 360 360 360 0 n/a 00:16:06.018 00:16:06.018 Elapsed time = 1.583 seconds 00:16:06.018 02:56:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 322320 00:16:06.018 02:56:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@946 -- # '[' -z 322320 ']' 00:16:06.018 02:56:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # kill -0 322320 00:16:06.018 02:56:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # uname 00:16:06.018 02:56:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:06.018 02:56:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 322320 00:16:06.018 02:56:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:06.018 02:56:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:06.018 02:56:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # echo 'killing process with pid 322320' 00:16:06.018 killing process with pid 322320 00:16:06.018 02:56:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@965 -- # kill 322320 00:16:06.018 [2024-05-13 02:56:56.666403] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:06.018 02:56:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # wait 322320 00:16:06.277 02:56:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:06.277 02:56:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:06.277 00:16:06.277 real 0m5.766s 00:16:06.277 user 0m16.247s 00:16:06.277 sys 0m0.540s 00:16:06.277 02:56:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:06.277 02:56:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:06.277 ************************************ 00:16:06.277 END TEST nvmf_vfio_user_nvme_compliance 00:16:06.277 ************************************ 00:16:06.277 02:56:56 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:06.277 02:56:56 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:06.277 02:56:56 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:06.277 02:56:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:06.277 ************************************ 00:16:06.277 START TEST nvmf_vfio_user_fuzz 00:16:06.277 ************************************ 00:16:06.277 02:56:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:06.277 * Looking for test storage... 
00:16:06.277 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:06.277 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:06.277 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:06.277 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:06.277 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:06.277 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:06.277 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:06.277 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:06.277 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:06.277 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:06.277 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:06.277 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:06.277 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:06.277 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:06.277 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:06.277 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:06.277 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:06.277 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:06.277 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:06.277 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:06.277 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:06.277 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:06.277 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:06.277 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.277 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.278 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.278 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:06.278 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.278 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:16:06.278 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:06.278 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:06.278 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:06.278 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:06.278 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:06.278 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:06.278 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:06.278 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:06.278 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:06.278 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:06.278 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:06.278 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:06.278 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:06.278 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:06.278 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:06.278 02:56:57 
nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=323033 00:16:06.278 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:06.278 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 323033' 00:16:06.278 Process pid: 323033 00:16:06.278 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:06.278 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 323033 00:16:06.278 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@827 -- # '[' -z 323033 ']' 00:16:06.278 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.278 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:06.278 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:06.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:06.278 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:06.278 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:06.847 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:06.847 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # return 0 00:16:06.847 02:56:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:07.787 02:56:58 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:07.787 02:56:58 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.787 02:56:58 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:07.787 02:56:58 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.787 02:56:58 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:07.787 02:56:58 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:07.787 02:56:58 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.787 02:56:58 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:07.787 malloc0 00:16:07.787 02:56:58 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.788 02:56:58 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:07.788 02:56:58 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.788 02:56:58 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:07.788 02:56:58 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.788 02:56:58 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:07.788 02:56:58 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.788 02:56:58 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:16:07.788 02:56:58 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.788 02:56:58 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:07.788 02:56:58 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.788 02:56:58 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:07.788 02:56:58 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.788 02:56:58 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:16:07.788 02:56:58 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:39.884 Fuzzing completed. Shutting down the fuzz application 00:16:39.884 00:16:39.884 Dumping successful admin opcodes: 00:16:39.884 8, 9, 10, 24, 00:16:39.884 Dumping successful io opcodes: 00:16:39.884 0, 00:16:39.884 NS: 0x200003a1ef00 I/O qp, Total commands completed: 592970, total successful commands: 2294, random_seed: 1879339200 00:16:39.884 NS: 0x200003a1ef00 admin qp, Total commands completed: 95249, total successful commands: 772, random_seed: 4261798272 00:16:39.884 02:57:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:39.884 02:57:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.884 02:57:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:39.884 02:57:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.884 02:57:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 323033 00:16:39.884 02:57:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@946 -- # '[' -z 323033 ']' 00:16:39.884 02:57:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # kill -0 323033 00:16:39.884 02:57:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # uname 00:16:39.884 02:57:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:39.884 02:57:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 323033 00:16:39.884 02:57:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:39.884 02:57:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:39.884 02:57:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 323033' 00:16:39.884 killing process with pid 323033 00:16:39.884 02:57:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@965 -- # kill 323033 00:16:39.884 02:57:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # wait 323033 00:16:39.884 02:57:29 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:39.884 
02:57:29 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:39.884 00:16:39.884 real 0m32.177s 00:16:39.884 user 0m30.342s 00:16:39.884 sys 0m29.222s 00:16:39.884 02:57:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:39.884 02:57:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:39.884 ************************************ 00:16:39.884 END TEST nvmf_vfio_user_fuzz 00:16:39.884 ************************************ 00:16:39.884 02:57:29 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:39.884 02:57:29 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:39.884 02:57:29 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:39.884 02:57:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:39.884 ************************************ 00:16:39.884 START TEST nvmf_host_management 00:16:39.884 ************************************ 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:39.884 * Looking for test storage... 00:16:39.884 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:16:39.884 02:57:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:40.456 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:40.456 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:16:40.456 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:40.456 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:40.456 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:40.456 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:40.456 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:40.456 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:40.457 02:57:31 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:40.457 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:40.457 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
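At this point nvmftestinit has identified the machine's two usable target ports: PCI functions 0000:0a:00.0 and 0000:0a:00.1 both match the Intel E810 ID pair (vendor 0x8086, device 0x159b) from the e810 list built above. Ignoring the xtrace noise, that scan amounts to roughly the following simplified stand-in (hypothetical; the real logic in test/nvmf/common.sh works from a prebuilt pci_bus_cache and also carries the x722 and Mellanox ID lists seen above):

    # Simplified stand-in for the traced E810 device scan (not the actual
    # common.sh implementation): match vendor/device IDs straight from sysfs.
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor") device=$(<"$pci/device")
        if [[ $vendor == 0x8086 && $device == 0x159b ]]; then
            echo "Found $(basename "$pci") ($vendor - $device)"
        fi
    done

The next step in the trace maps each matched function to its kernel net device via /sys/bus/pci/devices/<bdf>/net/, which is where the cvl_0_0 and cvl_0_1 names below come from.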
00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:40.457 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:40.457 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:40.457 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:40.457 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:16:40.457 00:16:40.457 --- 10.0.0.2 ping statistics --- 00:16:40.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.457 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:40.457 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:40.457 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:16:40.457 00:16:40.457 --- 10.0.0.1 ping statistics --- 00:16:40.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.457 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:40.457 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:40.717 02:57:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:16:40.717 02:57:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:16:40.717 02:57:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:40.717 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:40.717 02:57:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:40.717 02:57:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:40.717 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=329090 00:16:40.717 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:40.717 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 329090 00:16:40.717 02:57:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 329090 ']' 00:16:40.717 02:57:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.717 02:57:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:40.717 02:57:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.717 02:57:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:40.717 02:57:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:40.717 [2024-05-13 02:57:31.323441] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 
00:16:40.717 [2024-05-13 02:57:31.323514] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:40.717 EAL: No free 2048 kB hugepages reported on node 1 00:16:40.717 [2024-05-13 02:57:31.363121] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:40.717 [2024-05-13 02:57:31.389665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:40.717 [2024-05-13 02:57:31.479862] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:40.717 [2024-05-13 02:57:31.479914] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:40.717 [2024-05-13 02:57:31.479927] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:40.717 [2024-05-13 02:57:31.479938] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:40.717 [2024-05-13 02:57:31.479947] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:40.717 [2024-05-13 02:57:31.480036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:40.717 [2024-05-13 02:57:31.480097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:40.717 [2024-05-13 02:57:31.480166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:40.717 [2024-05-13 02:57:31.480168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:40.976 02:57:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:40.976 02:57:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:16:40.976 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:40.976 02:57:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:40.976 02:57:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:40.976 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:40.976 02:57:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:40.976 02:57:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.976 02:57:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:40.976 [2024-05-13 02:57:31.622279] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:40.976 02:57:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.976 02:57:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:40.976 02:57:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:40.976 02:57:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:40.976 02:57:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:40.976 02:57:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:16:40.976 02:57:31 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:16:40.976 02:57:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.976 02:57:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:40.976 Malloc0 00:16:40.976 [2024-05-13 02:57:31.680885] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:40.976 [2024-05-13 02:57:31.681210] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:40.976 02:57:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.976 02:57:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:40.976 02:57:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:40.976 02:57:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:40.976 02:57:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=329143 00:16:40.976 02:57:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 329143 /var/tmp/bdevperf.sock 00:16:40.976 02:57:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 329143 ']' 00:16:40.976 02:57:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:40.976 02:57:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:40.976 02:57:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:40.976 02:57:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:40.976 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:40.976 02:57:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:40.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:16:40.976 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:40.976 02:57:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:40.976 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:40.976 02:57:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:40.976 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:40.976 { 00:16:40.976 "params": { 00:16:40.976 "name": "Nvme$subsystem", 00:16:40.976 "trtype": "$TEST_TRANSPORT", 00:16:40.976 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:40.976 "adrfam": "ipv4", 00:16:40.976 "trsvcid": "$NVMF_PORT", 00:16:40.976 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:40.976 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:40.976 "hdgst": ${hdgst:-false}, 00:16:40.976 "ddgst": ${ddgst:-false} 00:16:40.976 }, 00:16:40.976 "method": "bdev_nvme_attach_controller" 00:16:40.976 } 00:16:40.976 EOF 00:16:40.976 )") 00:16:40.976 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:40.976 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:40.976 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:40.976 02:57:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:40.976 "params": { 00:16:40.976 "name": "Nvme0", 00:16:40.976 "trtype": "tcp", 00:16:40.976 "traddr": "10.0.0.2", 00:16:40.976 "adrfam": "ipv4", 00:16:40.976 "trsvcid": "4420", 00:16:40.976 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:40.976 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:40.976 "hdgst": false, 00:16:40.976 "ddgst": false 00:16:40.976 }, 00:16:40.976 "method": "bdev_nvme_attach_controller" 00:16:40.976 }' 00:16:40.977 [2024-05-13 02:57:31.751390] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:16:40.977 [2024-05-13 02:57:31.751485] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid329143 ] 00:16:41.235 EAL: No free 2048 kB hugepages reported on node 1 00:16:41.235 [2024-05-13 02:57:31.785780] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:41.235 [2024-05-13 02:57:31.815319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.235 [2024-05-13 02:57:31.902529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.494 Running I/O for 10 seconds... 
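The bdevperf run above gets its NVMe-oF controller entirely from JSON: gen_nvmf_target_json renders the bdev_nvme_attach_controller entry printed in the trace and hands it to bdevperf through --json /dev/fd/63 (a process substitution). Written out explicitly, the invocation is roughly the following sketch; the parameter values are the ones visible in the printf above, while the surrounding "subsystems"/"bdev" wrapper is the standard SPDK JSON-config layout and is not shown verbatim in the trace:

    # Sketch of the traced bdevperf setup with the generated JSON inlined.
    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK_ROOT/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        -q 64 -o 65536 -w verify -t 10 --json <(cat <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    )

The Nvme0n1 bdev this attaches is the one the read_io_count polling below watches via "rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1".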
00:16:41.494 02:57:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:41.494 02:57:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:16:41.494 02:57:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:41.494 02:57:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.494 02:57:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:41.494 02:57:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.494 02:57:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:41.494 02:57:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:41.494 02:57:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:41.494 02:57:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:41.494 02:57:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:16:41.494 02:57:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:16:41.494 02:57:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:41.494 02:57:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:41.494 02:57:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:41.494 02:57:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:41.494 02:57:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.494 02:57:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:41.494 02:57:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.494 02:57:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=3 00:16:41.494 02:57:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 3 -ge 100 ']' 00:16:41.494 02:57:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:16:41.755 02:57:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:16:41.755 02:57:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:41.755 02:57:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:41.755 02:57:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.755 02:57:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:41.755 02:57:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:41.755 02:57:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.755 02:57:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=322 00:16:41.755 02:57:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 322 -ge 100 ']' 00:16:41.755 02:57:32 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0 00:16:41.755 02:57:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:16:41.755 02:57:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:16:41.755 02:57:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:41.755 02:57:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.755 02:57:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:41.755 [2024-05-13 02:57:32.549105] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.755 [2024-05-13 02:57:32.549173] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.755 [2024-05-13 02:57:32.549187] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.755 [2024-05-13 02:57:32.549199] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.755 [2024-05-13 02:57:32.549227] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.755 [2024-05-13 02:57:32.549240] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.755 [2024-05-13 02:57:32.549253] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.755 [2024-05-13 02:57:32.549265] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.755 [2024-05-13 02:57:32.549277] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.755 [2024-05-13 02:57:32.549289] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.755 [2024-05-13 02:57:32.549301] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.755 [2024-05-13 02:57:32.549314] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.755 [2024-05-13 02:57:32.549326] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.755 [2024-05-13 02:57:32.549338] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.755 [2024-05-13 02:57:32.549350] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.755 [2024-05-13 02:57:32.549361] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.755 [2024-05-13 02:57:32.549374] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.755 [2024-05-13 02:57:32.549387] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) 
to be set 00:16:41.755 [2024-05-13 02:57:32.549398] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.755 [2024-05-13 02:57:32.549410] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.756 [2024-05-13 02:57:32.549422] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.756 [2024-05-13 02:57:32.549434] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.756 [2024-05-13 02:57:32.549454] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.756 [2024-05-13 02:57:32.549467] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.756 [2024-05-13 02:57:32.549479] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.756 [2024-05-13 02:57:32.549491] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.756 [2024-05-13 02:57:32.549503] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.756 [2024-05-13 02:57:32.549515] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.756 [2024-05-13 02:57:32.549527] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.756 [2024-05-13 02:57:32.549538] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.756 [2024-05-13 02:57:32.549551] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.756 [2024-05-13 02:57:32.549562] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.756 [2024-05-13 02:57:32.549574] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.756 [2024-05-13 02:57:32.549586] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.756 [2024-05-13 02:57:32.549598] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.756 [2024-05-13 02:57:32.549610] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.756 [2024-05-13 02:57:32.549621] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.756 [2024-05-13 02:57:32.549634] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.756 [2024-05-13 02:57:32.549646] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.756 [2024-05-13 02:57:32.549658] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.756 [2024-05-13 02:57:32.549670] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.756 [2024-05-13 02:57:32.549682] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.756 [2024-05-13 02:57:32.549705] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.756 [2024-05-13 02:57:32.549720] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.756 [2024-05-13 02:57:32.549733] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.756 [2024-05-13 02:57:32.549754] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.756 [2024-05-13 02:57:32.549766] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.756 [2024-05-13 02:57:32.549779] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.756 [2024-05-13 02:57:32.549791] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.756 [2024-05-13 02:57:32.549807] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.756 [2024-05-13 02:57:32.549819] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.756 [2024-05-13 02:57:32.549831] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.756 [2024-05-13 02:57:32.549843] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.756 [2024-05-13 02:57:32.549855] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.756 [2024-05-13 02:57:32.549866] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.756 [2024-05-13 02:57:32.549878] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.756 [2024-05-13 02:57:32.549890] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.756 [2024-05-13 02:57:32.549902] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.756 [2024-05-13 02:57:32.549914] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.756 [2024-05-13 02:57:32.549926] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.756 [2024-05-13 02:57:32.549938] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.756 [2024-05-13 02:57:32.549958] 
tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.756 [2024-05-13 02:57:32.549971] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780fa0 is same with the state(5) to be set 00:16:41.756 [2024-05-13 02:57:32.550792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.756 [2024-05-13 02:57:32.550832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.756 [2024-05-13 02:57:32.550862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.756 [2024-05-13 02:57:32.550878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.756 [2024-05-13 02:57:32.550895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.756 [2024-05-13 02:57:32.550909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.756 [2024-05-13 02:57:32.550925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.756 [2024-05-13 02:57:32.550939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.756 [2024-05-13 02:57:32.550954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.756 [2024-05-13 02:57:32.550967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.756 [2024-05-13 02:57:32.550990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.756 [2024-05-13 02:57:32.551004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.756 [2024-05-13 02:57:32.551025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.756 [2024-05-13 02:57:32.551064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.756 [2024-05-13 02:57:32.551080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.756 [2024-05-13 02:57:32.551093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.756 [2024-05-13 02:57:32.551108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.756 [2024-05-13 02:57:32.551137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.756 [2024-05-13 02:57:32.551154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:9 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.756 [2024-05-13 02:57:32.551167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.756 [2024-05-13 02:57:32.551182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.756 [2024-05-13 02:57:32.551196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.756 [2024-05-13 02:57:32.551211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.756 [2024-05-13 02:57:32.551225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.756 [2024-05-13 02:57:32.551240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.756 [2024-05-13 02:57:32.551254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.756 [2024-05-13 02:57:32.551269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.756 [2024-05-13 02:57:32.551283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.756 [2024-05-13 02:57:32.551299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.756 [2024-05-13 02:57:32.551312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.756 [2024-05-13 02:57:32.551328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.756 [2024-05-13 02:57:32.551350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.756 [2024-05-13 02:57:32.551366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.756 [2024-05-13 02:57:32.551379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.756 [2024-05-13 02:57:32.551395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.756 [2024-05-13 02:57:32.551416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.756 [2024-05-13 02:57:32.551432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.756 [2024-05-13 02:57:32.551446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.756 [2024-05-13 02:57:32.551466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 
lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.756 [2024-05-13 02:57:32.551480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.756 [2024-05-13 02:57:32.551495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.757 [2024-05-13 02:57:32.551509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.757 [2024-05-13 02:57:32.551525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.757 [2024-05-13 02:57:32.551538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.757 [2024-05-13 02:57:32.551555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.757 [2024-05-13 02:57:32.551568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.757 [2024-05-13 02:57:32.551598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.757 [2024-05-13 02:57:32.551612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.757 [2024-05-13 02:57:32.551627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.757 [2024-05-13 02:57:32.551640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.757 [2024-05-13 02:57:32.551655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.757 [2024-05-13 02:57:32.551670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.757 [2024-05-13 02:57:32.551684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.757 [2024-05-13 02:57:32.551720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.757 [2024-05-13 02:57:32.551748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.757 [2024-05-13 02:57:32.551762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.757 [2024-05-13 02:57:32.551778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.757 [2024-05-13 02:57:32.551793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.757 [2024-05-13 02:57:32.551808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:44672 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.757 [2024-05-13 02:57:32.551822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.757 [2024-05-13 02:57:32.551838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.757 [2024-05-13 02:57:32.551851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.757 [2024-05-13 02:57:32.551867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.757 [2024-05-13 02:57:32.551885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.757 [2024-05-13 02:57:32.551901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.757 [2024-05-13 02:57:32.551915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.757 [2024-05-13 02:57:32.551940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.757 [2024-05-13 02:57:32.551954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.757 [2024-05-13 02:57:32.551969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.757 [2024-05-13 02:57:32.551982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.757 [2024-05-13 02:57:32.552008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:45440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.757 [2024-05-13 02:57:32.552022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.757 [2024-05-13 02:57:32.552037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:45568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.757 [2024-05-13 02:57:32.552050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.757 [2024-05-13 02:57:32.552065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:45696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.757 [2024-05-13 02:57:32.552079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.757 [2024-05-13 02:57:32.552094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:45824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.757 [2024-05-13 02:57:32.552108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.757 [2024-05-13 02:57:32.552123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:45952 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:41.757 [2024-05-13 02:57:32.552137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.757 [2024-05-13 02:57:32.552152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:46080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.757 [2024-05-13 02:57:32.552166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.757 [2024-05-13 02:57:32.552181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:46208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.757 [2024-05-13 02:57:32.552195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.757 [2024-05-13 02:57:32.552211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:46336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.757 [2024-05-13 02:57:32.552225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.757 [2024-05-13 02:57:32.552240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:46464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.757 [2024-05-13 02:57:32.552254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.757 [2024-05-13 02:57:32.552274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:46592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.757 [2024-05-13 02:57:32.552288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.757 [2024-05-13 02:57:32.552304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:46720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.757 [2024-05-13 02:57:32.552317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.757 [2024-05-13 02:57:32.552333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:46848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.757 [2024-05-13 02:57:32.552346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.757 [2024-05-13 02:57:32.552361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:46976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.757 02:57:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.757 [2024-05-13 02:57:32.552375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.757 [2024-05-13 02:57:32.552390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:47104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.757 [2024-05-13 02:57:32.552404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.757 [2024-05-13 02:57:32.552419] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:49 nsid:1 lba:47232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.757 [2024-05-13 02:57:32.552432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.757 [2024-05-13 02:57:32.552447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:47360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.757 [2024-05-13 02:57:32.552461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.757 [2024-05-13 02:57:32.552476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:47488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.757 [2024-05-13 02:57:32.552489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.757 [2024-05-13 02:57:32.552505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:47616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.757 [2024-05-13 02:57:32.552518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.757 02:57:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:41.757 [2024-05-13 02:57:32.552534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:47744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.757 [2024-05-13 02:57:32.552548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.757 [2024-05-13 02:57:32.552563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:47872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.757 [2024-05-13 02:57:32.552576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.757 [2024-05-13 02:57:32.552592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:48000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.757 [2024-05-13 02:57:32.552609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.757 [2024-05-13 02:57:32.552625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:48128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.757 [2024-05-13 02:57:32.552639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.757 02:57:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.757 [2024-05-13 02:57:32.552654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:48256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.757 [2024-05-13 02:57:32.552668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.757 [2024-05-13 02:57:32.552683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:48384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:41.757 [2024-05-13 02:57:32.552705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.757 [2024-05-13 02:57:32.552723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:48512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.757 [2024-05-13 02:57:32.552742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.757 [2024-05-13 02:57:32.552757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:48640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.758 02:57:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:41.758 [2024-05-13 02:57:32.552771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.758 [2024-05-13 02:57:32.552787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:48768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.758 [2024-05-13 02:57:32.552801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.758 [2024-05-13 02:57:32.552816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:48896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.758 [2024-05-13 02:57:32.552830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.758 [2024-05-13 02:57:32.552845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:49024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.758 [2024-05-13 02:57:32.552859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.758 [2024-05-13 02:57:32.552873] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2818a30 is same with the state(5) to be set 00:16:41.758 [2024-05-13 02:57:32.552948] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2818a30 was disconnected and freed. reset controller. 
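The xtrace entries earlier in this block (host_management.sh lines 45 through 64) are the waitforio helper: it polls the bdevperf RPC socket with bdev_get_iostat until Nvme0n1 reports at least 100 completed reads, which is why the first pass (read_io_count=3) sleeps and retries while the second pass (read_io_count=322) sets ret=0 and breaks. A minimal sketch of that loop, reconstructed from the traced line numbers rather than copied verbatim from the script; rpc_cmd and the jq filter are exactly what the trace shows, everything else is an assumption:

waitforio() {
    local sock=$1 bdev=$2
    [ -z "$sock" ] && return 1        # @45: no RPC socket, nothing to poll
    [ -z "$bdev" ] && return 1        # @49: no bdev name
    local ret=1                       # @52
    local i                           # @53
    for ((i = 10; i != 0; i--)); do   # @54: at most ten polls
        # @55: ask the bdevperf app over its private socket how many reads finished so far
        local read_io_count
        read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then   # @58: enough I/O observed
            ret=0                               # @59
            break                               # @60
        fi
        sleep 0.25                              # @62
    done
    return $ret                                 # @64
}

The bounded loop (ten polls, 0.25 s apart) keeps a stuck target from hanging the test; once the threshold is hit, the waitforio call at host_management.sh@80 returns 0 and the test moves on to the nvmf_subsystem_remove_host step whose fallout is the abort/reset dump above.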
00:16:41.758 [2024-05-13 02:57:32.554125] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:42.018 task offset: 40960 on job bdev=Nvme0n1 fails 00:16:42.018 00:16:42.018 Latency(us) 00:16:42.018 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:42.018 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:42.018 Job: Nvme0n1 ended in about 0.39 seconds with error 00:16:42.018 Verification LBA range: start 0x0 length 0x400 00:16:42.018 Nvme0n1 : 0.39 810.20 50.64 162.04 0.00 64108.72 13689.74 49127.73 00:16:42.018 =================================================================================================================== 00:16:42.018 Total : 810.20 50.64 162.04 0.00 64108.72 13689.74 49127.73 00:16:42.018 [2024-05-13 02:57:32.556339] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:42.018 [2024-05-13 02:57:32.556372] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23e7460 (9): Bad file descriptor 00:16:42.018 02:57:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.018 02:57:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:16:42.018 [2024-05-13 02:57:32.609029] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:42.954 02:57:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 329143 00:16:42.954 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (329143) - No such process 00:16:42.954 02:57:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:16:42.954 02:57:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:42.954 02:57:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:42.954 02:57:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:42.954 02:57:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:42.954 02:57:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:42.954 02:57:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:42.954 02:57:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:42.954 { 00:16:42.954 "params": { 00:16:42.954 "name": "Nvme$subsystem", 00:16:42.954 "trtype": "$TEST_TRANSPORT", 00:16:42.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:42.954 "adrfam": "ipv4", 00:16:42.954 "trsvcid": "$NVMF_PORT", 00:16:42.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:42.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:42.954 "hdgst": ${hdgst:-false}, 00:16:42.954 "ddgst": ${ddgst:-false} 00:16:42.954 }, 00:16:42.954 "method": "bdev_nvme_attach_controller" 00:16:42.954 } 00:16:42.954 EOF 00:16:42.954 )") 00:16:42.954 02:57:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:42.954 02:57:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
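The config+= heredoc just traced is gen_nvmf_target_json assembling the bdev_nvme_attach_controller entry that the second bdevperf run uses to reach the target; the jq and printf calls render it, and --json /dev/fd/62 on the bdevperf command line is the process-substitution descriptor carrying the result. A hedged, standalone approximation of the same wiring follows; the file path is hypothetical, the path to bdevperf assumes an SPDK build tree, and the outer "subsystems"/"bdev" wrapper is an assumption since only the controller entry itself is visible in the trace:

cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# 64-deep queue, 64 KiB IOs, verify workload, 1 second, matching the traced invocation
./build/examples/bdevperf --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 1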
00:16:42.954 02:57:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:42.954 02:57:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:42.954 "params": { 00:16:42.954 "name": "Nvme0", 00:16:42.954 "trtype": "tcp", 00:16:42.954 "traddr": "10.0.0.2", 00:16:42.954 "adrfam": "ipv4", 00:16:42.954 "trsvcid": "4420", 00:16:42.954 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:42.954 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:42.954 "hdgst": false, 00:16:42.954 "ddgst": false 00:16:42.954 }, 00:16:42.954 "method": "bdev_nvme_attach_controller" 00:16:42.954 }' 00:16:42.954 [2024-05-13 02:57:33.607362] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:16:42.954 [2024-05-13 02:57:33.607460] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid329414 ] 00:16:42.954 EAL: No free 2048 kB hugepages reported on node 1 00:16:42.954 [2024-05-13 02:57:33.640428] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:42.954 [2024-05-13 02:57:33.669855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.214 [2024-05-13 02:57:33.758988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.472 Running I/O for 1 seconds... 00:16:44.407 00:16:44.407 Latency(us) 00:16:44.407 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:44.407 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:44.407 Verification LBA range: start 0x0 length 0x400 00:16:44.407 Nvme0n1 : 1.04 985.65 61.60 0.00 0.00 64080.73 16408.27 49516.09 00:16:44.407 =================================================================================================================== 00:16:44.407 Total : 985.65 61.60 0.00 0.00 64080.73 16408.27 49516.09 00:16:44.666 02:57:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:16:44.666 02:57:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:44.666 02:57:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:44.666 02:57:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:44.666 02:57:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:16:44.666 02:57:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:44.666 02:57:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:16:44.666 02:57:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:44.666 02:57:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:16:44.666 02:57:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:44.666 02:57:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:44.666 rmmod nvme_tcp 00:16:44.666 rmmod nvme_fabrics 00:16:44.666 rmmod nvme_keyring 00:16:44.666 02:57:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:44.666 02:57:35 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@124 -- # set -e 00:16:44.666 02:57:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:16:44.666 02:57:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 329090 ']' 00:16:44.666 02:57:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 329090 00:16:44.666 02:57:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 329090 ']' 00:16:44.666 02:57:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 329090 00:16:44.667 02:57:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:16:44.667 02:57:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:44.667 02:57:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 329090 00:16:44.667 02:57:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:44.667 02:57:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:44.667 02:57:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 329090' 00:16:44.667 killing process with pid 329090 00:16:44.667 02:57:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 329090 00:16:44.667 [2024-05-13 02:57:35.462013] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:44.667 02:57:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 329090 00:16:44.925 [2024-05-13 02:57:35.690462] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:44.925 02:57:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:44.925 02:57:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:44.925 02:57:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:44.925 02:57:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:44.925 02:57:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:44.925 02:57:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.925 02:57:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:44.925 02:57:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:47.466 02:57:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:47.466 02:57:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:16:47.466 00:16:47.466 real 0m8.547s 00:16:47.466 user 0m18.854s 00:16:47.466 sys 0m2.641s 00:16:47.466 02:57:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:47.466 02:57:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:47.466 ************************************ 00:16:47.466 END TEST nvmf_host_management 00:16:47.466 ************************************ 00:16:47.466 02:57:37 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:47.466 02:57:37 nvmf_tcp -- 
common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:47.466 02:57:37 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:47.466 02:57:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:47.466 ************************************ 00:16:47.466 START TEST nvmf_lvol 00:16:47.466 ************************************ 00:16:47.466 02:57:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:47.466 * Looking for test storage... 00:16:47.466 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:47.466 02:57:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:47.466 02:57:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:16:47.466 02:57:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:47.466 02:57:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:47.466 02:57:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:47.466 02:57:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:47.466 02:57:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:47.466 02:57:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:47.466 02:57:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:47.466 02:57:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:47.466 02:57:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:47.466 02:57:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:47.466 02:57:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:47.466 02:57:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:47.466 02:57:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:47.466 02:57:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:47.466 02:57:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:47.466 02:57:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:47.466 02:57:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:47.466 02:57:37 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:47.466 02:57:37 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:47.466 02:57:37 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:47.466 02:57:37 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.466 02:57:37 nvmf_tcp.nvmf_lvol -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.466 02:57:37 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.466 02:57:37 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:16:47.466 02:57:37 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.466 02:57:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:16:47.466 02:57:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:47.466 02:57:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:47.466 02:57:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:47.466 02:57:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:47.466 02:57:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:47.466 02:57:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:47.466 02:57:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:47.466 02:57:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:47.466 02:57:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:47.466 02:57:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:47.467 02:57:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:47.467 02:57:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:47.467 02:57:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:47.467 02:57:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:47.467 02:57:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:47.467 02:57:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:47.467 02:57:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:47.467 02:57:37 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@410 -- # local -g is_hw=no 00:16:47.467 02:57:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:47.467 02:57:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:47.467 02:57:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:47.467 02:57:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:47.467 02:57:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:47.467 02:57:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:47.467 02:57:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:16:47.467 02:57:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 
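The nvmf_lvol.sh run beginning above fixes its geometry up front: 64 MB malloc bdevs with 512-byte blocks, lvol sizes of 20 and 30 (the script's INIT and FINAL constants), and rpc_py pointing at scripts/rpc.py. The provisioning traced further down in this log reduces to roughly the RPC sequence below; this is a sketch, the UUID plumbing is simplified, and the nvmf_tgt start-up and waitforlisten steps are omitted:

rpc_py=./scripts/rpc.py
MALLOC_BDEV_SIZE=64 MALLOC_BLOCK_SIZE=512 LVOL_BDEV_INIT_SIZE=20

$rpc_py nvmf_create_transport -t tcp -o -u 8192                            # nvmf_lvol.sh@21
base0=$($rpc_py bdev_malloc_create $MALLOC_BDEV_SIZE $MALLOC_BLOCK_SIZE)   # -> Malloc0
base1=$($rpc_py bdev_malloc_create $MALLOC_BDEV_SIZE $MALLOC_BLOCK_SIZE)   # -> Malloc1
$rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b "$base0 $base1"            # stripe the two malloc bdevs
lvs=$($rpc_py bdev_lvol_create_lvstore raid0 lvs)                          # lvstore UUID
lvol=$($rpc_py bdev_lvol_create -u "$lvs" lvol $LVOL_BDEV_INIT_SIZE)       # 20 MB lvol UUID
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"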
00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:49.370 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:49.370 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:49.370 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:49.371 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:49.371 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:49.371 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:49.371 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms 00:16:49.371 00:16:49.371 --- 10.0.0.2 ping statistics --- 00:16:49.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.371 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:49.371 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:49.371 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:16:49.371 00:16:49.371 --- 10.0.0.1 ping statistics --- 00:16:49.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.371 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=331560 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 331560 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 331560 ']' 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:49.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:49.371 02:57:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:49.371 [2024-05-13 02:57:40.003927] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:16:49.371 [2024-05-13 02:57:40.004037] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:49.371 EAL: No free 2048 kB hugepages reported on node 1 00:16:49.371 [2024-05-13 02:57:40.046008] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:49.371 [2024-05-13 02:57:40.078341] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:49.371 [2024-05-13 02:57:40.170191] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
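The nvmf_tcp_init block traced above (namespace creation, addressing, the iptables rule and the two ping checks) can be reproduced by hand; a condensed sketch follows, where the cvl_0_0/cvl_0_1 interface names, the cvl_0_0_ns_spdk namespace and the 10.0.0.x addresses are the values from this particular run, not fixed defaults:

    # move the target-side port into its own namespace and address both ends
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP traffic on port 4420 and check reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1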
00:16:49.371 [2024-05-13 02:57:40.170252] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:49.371 [2024-05-13 02:57:40.170277] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:49.371 [2024-05-13 02:57:40.170290] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:49.371 [2024-05-13 02:57:40.170302] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:49.371 [2024-05-13 02:57:40.170368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:49.371 [2024-05-13 02:57:40.170436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:49.371 [2024-05-13 02:57:40.170438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.630 02:57:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:49.630 02:57:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:16:49.630 02:57:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:49.630 02:57:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:49.630 02:57:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:49.630 02:57:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:49.630 02:57:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:49.888 [2024-05-13 02:57:40.545728] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:49.888 02:57:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:50.147 02:57:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:50.147 02:57:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:50.405 02:57:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:50.405 02:57:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:50.663 02:57:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:50.921 02:57:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=055f847c-a7ac-4d60-af1d-9eae6ff9aa68 00:16:50.921 02:57:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 055f847c-a7ac-4d60-af1d-9eae6ff9aa68 lvol 20 00:16:51.179 02:57:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=6c790cd9-3098-42be-9fee-243a33df3ce7 00:16:51.179 02:57:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:51.436 02:57:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6c790cd9-3098-42be-9fee-243a33df3ce7 00:16:51.694 02:57:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:51.952 [2024-05-13 02:57:42.562902] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:51.952 [2024-05-13 02:57:42.563229] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:51.952 02:57:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:52.210 02:57:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=331917 00:16:52.210 02:57:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:52.210 02:57:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:52.210 EAL: No free 2048 kB hugepages reported on node 1 00:16:53.143 02:57:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 6c790cd9-3098-42be-9fee-243a33df3ce7 MY_SNAPSHOT 00:16:53.401 02:57:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=00f79132-fee4-43e6-acc8-8e215e38c496 00:16:53.401 02:57:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 6c790cd9-3098-42be-9fee-243a33df3ce7 30 00:16:53.658 02:57:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 00f79132-fee4-43e6-acc8-8e215e38c496 MY_CLONE 00:16:53.916 02:57:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=42522e96-ac5e-439c-b2e4-108372662809 00:16:53.916 02:57:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 42522e96-ac5e-439c-b2e4-108372662809 00:16:54.481 02:57:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 331917 00:17:02.657 Initializing NVMe Controllers 00:17:02.657 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:17:02.657 Controller IO queue size 128, less than required. 00:17:02.657 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:02.657 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:17:02.657 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:17:02.657 Initialization complete. Launching workers. 
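The workload whose summary follows was provisioned by the RPC sequence traced above; a condensed sketch is given below, with the long jenkins paths shortened to $rpc/$perf shorthands and the inline comments added for readability (only the full paths appear in the actual run):

    rpc=./scripts/rpc.py              # stands in for the full spdk/scripts/rpc.py path used above
    perf=./build/bin/spdk_nvme_perf   # stands in for the full spdk_nvme_perf path used above
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512                          # Malloc0
    $rpc bdev_malloc_create 64 512                          # Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)          # capture the lvstore UUID
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)         # capture the lvol UUID
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # 10 s of 4 KiB random writes against the exported namespace, with a
    # snapshot/resize/clone/inflate pass on the lvol while the I/O is in flight
    $perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
          -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $rpc bdev_lvol_resize "$lvol" 30
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"
    wait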
00:17:02.657 ======================================================== 00:17:02.657 Latency(us) 00:17:02.657 Device Information : IOPS MiB/s Average min max 00:17:02.657 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10296.30 40.22 12439.70 1833.93 92802.84 00:17:02.657 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11205.90 43.77 11427.15 2300.45 71256.21 00:17:02.657 ======================================================== 00:17:02.657 Total : 21502.20 83.99 11912.01 1833.93 92802.84 00:17:02.657 00:17:02.657 02:57:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:02.915 02:57:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6c790cd9-3098-42be-9fee-243a33df3ce7 00:17:02.915 02:57:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 055f847c-a7ac-4d60-af1d-9eae6ff9aa68 00:17:03.172 02:57:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:17:03.431 02:57:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:17:03.431 02:57:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:17:03.431 02:57:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:03.431 02:57:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:17:03.431 02:57:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:03.431 02:57:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:17:03.431 02:57:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:03.431 02:57:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:03.431 rmmod nvme_tcp 00:17:03.431 rmmod nvme_fabrics 00:17:03.431 rmmod nvme_keyring 00:17:03.431 02:57:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:03.431 02:57:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:17:03.431 02:57:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:17:03.431 02:57:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 331560 ']' 00:17:03.431 02:57:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 331560 00:17:03.431 02:57:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 331560 ']' 00:17:03.431 02:57:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 331560 00:17:03.431 02:57:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:17:03.431 02:57:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:03.431 02:57:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 331560 00:17:03.431 02:57:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:03.431 02:57:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:03.431 02:57:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 331560' 00:17:03.431 killing process with pid 331560 00:17:03.431 02:57:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 331560 00:17:03.431 [2024-05-13 02:57:54.052926] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for 
removal in v24.09 hit 1 times 00:17:03.431 02:57:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 331560 00:17:03.688 02:57:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:03.688 02:57:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:03.688 02:57:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:03.688 02:57:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:03.688 02:57:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:03.688 02:57:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.688 02:57:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:03.688 02:57:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:05.587 02:57:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:05.587 00:17:05.587 real 0m18.553s 00:17:05.587 user 1m2.552s 00:17:05.587 sys 0m6.098s 00:17:05.587 02:57:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:05.587 02:57:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:05.587 ************************************ 00:17:05.588 END TEST nvmf_lvol 00:17:05.588 ************************************ 00:17:05.847 02:57:56 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:05.847 02:57:56 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:05.847 02:57:56 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:05.847 02:57:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:05.847 ************************************ 00:17:05.847 START TEST nvmf_lvs_grow 00:17:05.847 ************************************ 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:05.847 * Looking for test storage... 
00:17:05.847 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:17:05.847 02:57:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:07.748 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:07.748 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:17:07.748 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:07.748 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:07.748 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:07.748 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:07.748 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:07.748 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:17:07.748 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:07.748 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:17:07.748 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:17:07.748 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:17:07.748 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:17:07.748 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:17:07.748 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:17:07.748 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:07.748 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:07.748 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:07.748 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:07.748 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:07.748 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:07.748 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:07.748 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:07.748 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:07.748 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:07.748 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:07.748 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:07.748 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:07.748 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:07.748 02:57:58 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:07.748 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:07.748 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:07.748 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:07.748 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:07.748 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:07.748 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:08.007 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:08.007 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:08.007 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:08.007 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:08.007 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:08.007 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:08.007 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:08.007 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:08.007 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:08.007 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:08.007 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:08.007 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:08.007 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:08.007 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:08.007 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:08.007 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:08.007 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.007 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:08.007 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:08.007 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:08.007 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:08.007 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.007 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:08.007 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:08.007 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.007 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:08.007 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.007 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:08.007 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:08.007 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:08.007 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:17:08.007 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.007 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:08.007 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:08.007 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.007 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:08.007 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:17:08.007 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:08.007 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:08.007 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:08.007 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:08.007 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:08.007 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:08.007 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:08.007 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:08.007 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:08.007 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:08.007 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:08.007 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:08.007 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:08.008 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:08.008 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:08.008 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:08.008 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:08.008 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:08.008 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:08.008 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:08.008 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:08.008 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:08.008 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:08.008 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:08.008 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:17:08.008 00:17:08.008 --- 10.0.0.2 ping statistics --- 00:17:08.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.008 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:17:08.008 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:08.008 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:08.008 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:17:08.008 00:17:08.008 --- 10.0.0.1 ping statistics --- 00:17:08.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.008 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:17:08.008 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:08.008 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:17:08.008 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:08.008 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:08.008 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:08.008 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:08.008 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:08.008 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:08.008 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:08.008 02:57:58 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:17:08.008 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:08.008 02:57:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:08.008 02:57:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:08.008 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=335166 00:17:08.008 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:08.008 02:57:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 335166 00:17:08.008 02:57:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 335166 ']' 00:17:08.008 02:57:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.008 02:57:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:08.008 02:57:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:08.008 02:57:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:08.008 02:57:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:08.008 [2024-05-13 02:57:58.753655] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:17:08.008 [2024-05-13 02:57:58.753749] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:08.008 EAL: No free 2048 kB hugepages reported on node 1 00:17:08.008 [2024-05-13 02:57:58.792371] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:08.267 [2024-05-13 02:57:58.823444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.267 [2024-05-13 02:57:58.915724] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
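The nvmfappstart step traced above amounts to launching the target inside the test namespace and blocking until its RPC socket answers, then adding the TCP transport; a minimal sketch follows, with paths shortened and the polling loop standing in as a simplification of the harness's waitforlisten helper:

    # start the target on one core inside the test namespace
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # wait until the app is listening on /var/tmp/spdk.sock, then add the TCP transport
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192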
00:17:08.267 [2024-05-13 02:57:58.915784] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:08.267 [2024-05-13 02:57:58.915801] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:08.267 [2024-05-13 02:57:58.915813] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:08.267 [2024-05-13 02:57:58.915825] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:08.267 [2024-05-13 02:57:58.915864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.267 02:57:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:08.267 02:57:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:17:08.267 02:57:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:08.267 02:57:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:08.267 02:57:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:08.267 02:57:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:08.267 02:57:59 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:08.525 [2024-05-13 02:57:59.288961] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:08.525 02:57:59 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:17:08.525 02:57:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:08.525 02:57:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:08.525 02:57:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:08.783 ************************************ 00:17:08.783 START TEST lvs_grow_clean 00:17:08.783 ************************************ 00:17:08.783 02:57:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:17:08.783 02:57:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:08.783 02:57:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:08.783 02:57:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:08.784 02:57:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:08.784 02:57:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:08.784 02:57:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:08.784 02:57:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:08.784 02:57:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:08.784 02:57:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:09.042 02:57:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:09.043 02:57:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:09.301 02:57:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=7eb72c01-1660-4280-a44e-9b750bfb10a9 00:17:09.301 02:57:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7eb72c01-1660-4280-a44e-9b750bfb10a9 00:17:09.301 02:57:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:09.559 02:58:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:09.559 02:58:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:09.559 02:58:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7eb72c01-1660-4280-a44e-9b750bfb10a9 lvol 150 00:17:09.817 02:58:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=712ce2e8-7bdc-4744-88ea-66785656ccab 00:17:09.817 02:58:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:09.817 02:58:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:10.075 [2024-05-13 02:58:00.657072] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:10.075 [2024-05-13 02:58:00.657147] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:10.075 true 00:17:10.075 02:58:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7eb72c01-1660-4280-a44e-9b750bfb10a9 00:17:10.075 02:58:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:10.333 02:58:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:10.333 02:58:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:10.591 02:58:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 712ce2e8-7bdc-4744-88ea-66785656ccab 00:17:10.849 02:58:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:11.108 [2024-05-13 02:58:01.667935] nvmf_rpc.c: 
610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:11.108 [2024-05-13 02:58:01.668257] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:11.108 02:58:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:11.366 02:58:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=335606 00:17:11.366 02:58:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:11.366 02:58:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:11.366 02:58:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 335606 /var/tmp/bdevperf.sock 00:17:11.366 02:58:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 335606 ']' 00:17:11.366 02:58:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:11.366 02:58:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:11.366 02:58:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:11.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:11.366 02:58:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:11.366 02:58:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:11.366 [2024-05-13 02:58:01.975760] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:17:11.366 [2024-05-13 02:58:01.975829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid335606 ] 00:17:11.366 EAL: No free 2048 kB hugepages reported on node 1 00:17:11.366 [2024-05-13 02:58:02.007535] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
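bdevperf is started here in -z mode, meaning it sits idle on its own RPC socket until bdevs are attached and a run is triggered from outside; the grow test then attaches the exported lvol as an NVMe-oF TCP bdev and launches the workload. Condensed from the trace, with paths shortened, the pattern is:

    # start bdevperf idle (-z), reachable on a private RPC socket
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 \
        -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    # attach the target's namespace via that socket; the controller Nvme0 exposes bdev Nvme0n1
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    # kick off the configured 10 s randwrite run against the attached bdev
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests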
00:17:11.366 [2024-05-13 02:58:02.037217] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.366 [2024-05-13 02:58:02.130265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:11.625 02:58:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:11.625 02:58:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:17:11.625 02:58:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:11.883 Nvme0n1 00:17:11.883 02:58:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:12.140 [ 00:17:12.140 { 00:17:12.140 "name": "Nvme0n1", 00:17:12.140 "aliases": [ 00:17:12.140 "712ce2e8-7bdc-4744-88ea-66785656ccab" 00:17:12.140 ], 00:17:12.140 "product_name": "NVMe disk", 00:17:12.140 "block_size": 4096, 00:17:12.140 "num_blocks": 38912, 00:17:12.140 "uuid": "712ce2e8-7bdc-4744-88ea-66785656ccab", 00:17:12.140 "assigned_rate_limits": { 00:17:12.140 "rw_ios_per_sec": 0, 00:17:12.140 "rw_mbytes_per_sec": 0, 00:17:12.140 "r_mbytes_per_sec": 0, 00:17:12.140 "w_mbytes_per_sec": 0 00:17:12.140 }, 00:17:12.140 "claimed": false, 00:17:12.140 "zoned": false, 00:17:12.140 "supported_io_types": { 00:17:12.141 "read": true, 00:17:12.141 "write": true, 00:17:12.141 "unmap": true, 00:17:12.141 "write_zeroes": true, 00:17:12.141 "flush": true, 00:17:12.141 "reset": true, 00:17:12.141 "compare": true, 00:17:12.141 "compare_and_write": true, 00:17:12.141 "abort": true, 00:17:12.141 "nvme_admin": true, 00:17:12.141 "nvme_io": true 00:17:12.141 }, 00:17:12.141 "memory_domains": [ 00:17:12.141 { 00:17:12.141 "dma_device_id": "system", 00:17:12.141 "dma_device_type": 1 00:17:12.141 } 00:17:12.141 ], 00:17:12.141 "driver_specific": { 00:17:12.141 "nvme": [ 00:17:12.141 { 00:17:12.141 "trid": { 00:17:12.141 "trtype": "TCP", 00:17:12.141 "adrfam": "IPv4", 00:17:12.141 "traddr": "10.0.0.2", 00:17:12.141 "trsvcid": "4420", 00:17:12.141 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:12.141 }, 00:17:12.141 "ctrlr_data": { 00:17:12.141 "cntlid": 1, 00:17:12.141 "vendor_id": "0x8086", 00:17:12.141 "model_number": "SPDK bdev Controller", 00:17:12.141 "serial_number": "SPDK0", 00:17:12.141 "firmware_revision": "24.05", 00:17:12.141 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:12.141 "oacs": { 00:17:12.141 "security": 0, 00:17:12.141 "format": 0, 00:17:12.141 "firmware": 0, 00:17:12.141 "ns_manage": 0 00:17:12.141 }, 00:17:12.141 "multi_ctrlr": true, 00:17:12.141 "ana_reporting": false 00:17:12.141 }, 00:17:12.141 "vs": { 00:17:12.141 "nvme_version": "1.3" 00:17:12.141 }, 00:17:12.141 "ns_data": { 00:17:12.141 "id": 1, 00:17:12.141 "can_share": true 00:17:12.141 } 00:17:12.141 } 00:17:12.141 ], 00:17:12.141 "mp_policy": "active_passive" 00:17:12.141 } 00:17:12.141 } 00:17:12.141 ] 00:17:12.141 02:58:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=335736 00:17:12.141 02:58:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:12.141 02:58:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:12.399 Running I/O for 10 seconds... 00:17:13.334 Latency(us) 00:17:13.334 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:13.334 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:13.334 Nvme0n1 : 1.00 14427.00 56.36 0.00 0.00 0.00 0.00 0.00 00:17:13.334 =================================================================================================================== 00:17:13.334 Total : 14427.00 56.36 0.00 0.00 0.00 0.00 0.00 00:17:13.334 00:17:14.268 02:58:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7eb72c01-1660-4280-a44e-9b750bfb10a9 00:17:14.268 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:14.268 Nvme0n1 : 2.00 14535.00 56.78 0.00 0.00 0.00 0.00 0.00 00:17:14.268 =================================================================================================================== 00:17:14.268 Total : 14535.00 56.78 0.00 0.00 0.00 0.00 0.00 00:17:14.268 00:17:14.526 true 00:17:14.526 02:58:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7eb72c01-1660-4280-a44e-9b750bfb10a9 00:17:14.526 02:58:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:14.785 02:58:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:14.785 02:58:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:14.785 02:58:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 335736 00:17:15.351 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:15.351 Nvme0n1 : 3.00 14639.00 57.18 0.00 0.00 0.00 0.00 0.00 00:17:15.351 =================================================================================================================== 00:17:15.351 Total : 14639.00 57.18 0.00 0.00 0.00 0.00 0.00 00:17:15.351 00:17:16.362 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:16.362 Nvme0n1 : 4.00 14742.75 57.59 0.00 0.00 0.00 0.00 0.00 00:17:16.362 =================================================================================================================== 00:17:16.362 Total : 14742.75 57.59 0.00 0.00 0.00 0.00 0.00 00:17:16.362 00:17:17.303 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:17.303 Nvme0n1 : 5.00 14824.80 57.91 0.00 0.00 0.00 0.00 0.00 00:17:17.303 =================================================================================================================== 00:17:17.303 Total : 14824.80 57.91 0.00 0.00 0.00 0.00 0.00 00:17:17.303 00:17:18.241 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:18.241 Nvme0n1 : 6.00 14913.83 58.26 0.00 0.00 0.00 0.00 0.00 00:17:18.241 =================================================================================================================== 00:17:18.241 Total : 14913.83 58.26 0.00 0.00 0.00 0.00 0.00 00:17:18.241 00:17:19.623 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:19.623 Nvme0n1 : 7.00 14950.00 58.40 0.00 0.00 0.00 0.00 0.00 00:17:19.623 
=================================================================================================================== 00:17:19.623 Total : 14950.00 58.40 0.00 0.00 0.00 0.00 0.00 00:17:19.623 00:17:20.192 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:20.192 Nvme0n1 : 8.00 14963.38 58.45 0.00 0.00 0.00 0.00 0.00 00:17:20.192 =================================================================================================================== 00:17:20.192 Total : 14963.38 58.45 0.00 0.00 0.00 0.00 0.00 00:17:20.192 00:17:21.573 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:21.573 Nvme0n1 : 9.00 14986.11 58.54 0.00 0.00 0.00 0.00 0.00 00:17:21.573 =================================================================================================================== 00:17:21.573 Total : 14986.11 58.54 0.00 0.00 0.00 0.00 0.00 00:17:21.573 00:17:22.511 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:22.511 Nvme0n1 : 10.00 15002.60 58.60 0.00 0.00 0.00 0.00 0.00 00:17:22.511 =================================================================================================================== 00:17:22.511 Total : 15002.60 58.60 0.00 0.00 0.00 0.00 0.00 00:17:22.511 00:17:22.511 00:17:22.511 Latency(us) 00:17:22.511 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:22.511 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:22.511 Nvme0n1 : 10.01 15006.59 58.62 0.00 0.00 8523.50 5437.06 16214.09 00:17:22.511 =================================================================================================================== 00:17:22.511 Total : 15006.59 58.62 0.00 0.00 8523.50 5437.06 16214.09 00:17:22.511 0 00:17:22.511 02:58:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 335606 00:17:22.511 02:58:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 335606 ']' 00:17:22.511 02:58:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 335606 00:17:22.511 02:58:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:17:22.511 02:58:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:22.511 02:58:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 335606 00:17:22.511 02:58:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:22.511 02:58:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:22.511 02:58:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 335606' 00:17:22.511 killing process with pid 335606 00:17:22.511 02:58:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 335606 00:17:22.511 Received shutdown signal, test time was about 10.000000 seconds 00:17:22.511 00:17:22.511 Latency(us) 00:17:22.511 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:22.511 =================================================================================================================== 00:17:22.511 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:22.511 02:58:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 335606 00:17:22.511 02:58:13 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:22.770 02:58:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:23.340 02:58:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7eb72c01-1660-4280-a44e-9b750bfb10a9 00:17:23.340 02:58:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:23.340 02:58:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:23.340 02:58:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:17:23.340 02:58:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:23.600 [2024-05-13 02:58:14.352478] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:23.600 02:58:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7eb72c01-1660-4280-a44e-9b750bfb10a9 00:17:23.600 02:58:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:17:23.600 02:58:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7eb72c01-1660-4280-a44e-9b750bfb10a9 00:17:23.600 02:58:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:23.600 02:58:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:23.600 02:58:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:23.600 02:58:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:23.600 02:58:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:23.600 02:58:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:23.600 02:58:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:23.600 02:58:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:23.600 02:58:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7eb72c01-1660-4280-a44e-9b750bfb10a9 00:17:23.860 request: 00:17:23.860 { 00:17:23.860 "uuid": "7eb72c01-1660-4280-a44e-9b750bfb10a9", 00:17:23.860 "method": "bdev_lvol_get_lvstores", 00:17:23.860 "req_id": 1 00:17:23.860 } 00:17:23.860 
Got JSON-RPC error response 00:17:23.860 response: 00:17:23.860 { 00:17:23.860 "code": -19, 00:17:23.860 "message": "No such device" 00:17:23.860 } 00:17:23.860 02:58:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:17:23.860 02:58:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:23.860 02:58:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:23.860 02:58:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:23.860 02:58:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:24.119 aio_bdev 00:17:24.119 02:58:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 712ce2e8-7bdc-4744-88ea-66785656ccab 00:17:24.119 02:58:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=712ce2e8-7bdc-4744-88ea-66785656ccab 00:17:24.119 02:58:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:24.119 02:58:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:17:24.119 02:58:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:24.119 02:58:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:24.119 02:58:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:24.377 02:58:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 712ce2e8-7bdc-4744-88ea-66785656ccab -t 2000 00:17:24.944 [ 00:17:24.944 { 00:17:24.944 "name": "712ce2e8-7bdc-4744-88ea-66785656ccab", 00:17:24.944 "aliases": [ 00:17:24.944 "lvs/lvol" 00:17:24.944 ], 00:17:24.944 "product_name": "Logical Volume", 00:17:24.944 "block_size": 4096, 00:17:24.944 "num_blocks": 38912, 00:17:24.944 "uuid": "712ce2e8-7bdc-4744-88ea-66785656ccab", 00:17:24.944 "assigned_rate_limits": { 00:17:24.944 "rw_ios_per_sec": 0, 00:17:24.944 "rw_mbytes_per_sec": 0, 00:17:24.944 "r_mbytes_per_sec": 0, 00:17:24.944 "w_mbytes_per_sec": 0 00:17:24.944 }, 00:17:24.944 "claimed": false, 00:17:24.944 "zoned": false, 00:17:24.944 "supported_io_types": { 00:17:24.944 "read": true, 00:17:24.944 "write": true, 00:17:24.944 "unmap": true, 00:17:24.944 "write_zeroes": true, 00:17:24.944 "flush": false, 00:17:24.944 "reset": true, 00:17:24.944 "compare": false, 00:17:24.944 "compare_and_write": false, 00:17:24.944 "abort": false, 00:17:24.944 "nvme_admin": false, 00:17:24.944 "nvme_io": false 00:17:24.944 }, 00:17:24.944 "driver_specific": { 00:17:24.944 "lvol": { 00:17:24.944 "lvol_store_uuid": "7eb72c01-1660-4280-a44e-9b750bfb10a9", 00:17:24.944 "base_bdev": "aio_bdev", 00:17:24.944 "thin_provision": false, 00:17:24.944 "snapshot": false, 00:17:24.944 "clone": false, 00:17:24.944 "esnap_clone": false 00:17:24.944 } 00:17:24.944 } 00:17:24.944 } 00:17:24.944 ] 00:17:24.944 02:58:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:17:24.944 02:58:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
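The -19 (No such device) response above is the expected result: deleting the backing aio_bdev takes the lvstore down with it, and re-creating the AIO bdev over the same file lets the examine path rediscover the lvstore and its lvol, which is what the bdev_get_bdevs output above confirms. A condensed sketch of that check, using this run's UUID and paths:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC=$SPDK/scripts/rpc.py
    $RPC bdev_aio_delete aio_bdev
    # the lookup must now fail (the harness wraps this in its NOT helper)
    if $RPC bdev_lvol_get_lvstores -u 7eb72c01-1660-4280-a44e-9b750bfb10a9; then
        echo "lvstore still present after aio_bdev removal" >&2
        exit 1
    fi
    # re-attach the same backing file; examine brings lvs/lvol back
    $RPC bdev_aio_create $SPDK/test/nvmf/target/aio_bdev aio_bdev 4096
    $RPC bdev_wait_for_examine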
target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7eb72c01-1660-4280-a44e-9b750bfb10a9 00:17:24.944 02:58:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:24.944 02:58:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:24.944 02:58:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7eb72c01-1660-4280-a44e-9b750bfb10a9 00:17:24.944 02:58:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:25.203 02:58:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:25.203 02:58:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 712ce2e8-7bdc-4744-88ea-66785656ccab 00:17:25.461 02:58:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7eb72c01-1660-4280-a44e-9b750bfb10a9 00:17:25.718 02:58:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:25.976 02:58:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:26.233 00:17:26.233 real 0m17.458s 00:17:26.233 user 0m17.001s 00:17:26.233 sys 0m1.817s 00:17:26.233 02:58:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:26.233 02:58:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:26.233 ************************************ 00:17:26.233 END TEST lvs_grow_clean 00:17:26.233 ************************************ 00:17:26.233 02:58:16 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:26.233 02:58:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:26.233 02:58:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:26.233 02:58:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:26.233 ************************************ 00:17:26.233 START TEST lvs_grow_dirty 00:17:26.233 ************************************ 00:17:26.233 02:58:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:17:26.233 02:58:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:26.233 02:58:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:26.233 02:58:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:26.233 02:58:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:26.233 02:58:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:26.233 02:58:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:26.233 02:58:16 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:26.233 02:58:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:26.233 02:58:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:26.491 02:58:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:26.491 02:58:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:26.750 02:58:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=370ad65a-c412-423d-b1e9-85a19aa8479b 00:17:26.750 02:58:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 370ad65a-c412-423d-b1e9-85a19aa8479b 00:17:26.750 02:58:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:27.010 02:58:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:27.010 02:58:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:27.010 02:58:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 370ad65a-c412-423d-b1e9-85a19aa8479b lvol 150 00:17:27.270 02:58:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=555e5b6b-c3a7-431a-bd5e-f98c3c73e215 00:17:27.270 02:58:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:27.270 02:58:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:27.528 [2024-05-13 02:58:18.156951] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:27.528 [2024-05-13 02:58:18.157054] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:27.528 true 00:17:27.528 02:58:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 370ad65a-c412-423d-b1e9-85a19aa8479b 00:17:27.528 02:58:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:27.786 02:58:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:27.786 02:58:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:28.046 02:58:18 
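Setup for the dirty variant: a 200 MiB file is turned into a 4 KiB-block AIO bdev, an lvstore with 4 MiB clusters is created on it (49 data clusters), a 150 MiB lvol is carved out, and the backing file is then grown to 400 MiB and rescanned; the lvstore itself has not been grown yet, so total_data_clusters stays at 49. Condensed, with this run's paths, names and sizes:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC=$SPDK/scripts/rpc.py
    AIO=$SPDK/test/nvmf/target/aio_bdev
    truncate -s 200M $AIO
    $RPC bdev_aio_create $AIO aio_bdev 4096
    lvs=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 \
              --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    $RPC bdev_lvol_create -u $lvs lvol 150      # 150 MiB logical volume
    truncate -s 400M $AIO                       # grow the backing file...
    $RPC bdev_aio_rescan aio_bdev               # ...and let the AIO bdev pick it up
    $RPC bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].total_data_clusters'   # still 49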
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 555e5b6b-c3a7-431a-bd5e-f98c3c73e215 00:17:28.306 02:58:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:28.566 [2024-05-13 02:58:19.147952] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:28.566 02:58:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:28.825 02:58:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=337652 00:17:28.825 02:58:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:28.825 02:58:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:28.825 02:58:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 337652 /var/tmp/bdevperf.sock 00:17:28.825 02:58:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 337652 ']' 00:17:28.825 02:58:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:28.825 02:58:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:28.826 02:58:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:28.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:28.826 02:58:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:28.826 02:58:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:28.826 [2024-05-13 02:58:19.452019] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:17:28.826 [2024-05-13 02:58:19.452106] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid337652 ] 00:17:28.826 EAL: No free 2048 kB hugepages reported on node 1 00:17:28.826 [2024-05-13 02:58:19.485010] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:17:28.826 [2024-05-13 02:58:19.514990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.826 [2024-05-13 02:58:19.613395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:29.084 02:58:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:29.084 02:58:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:17:29.084 02:58:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:29.652 Nvme0n1 00:17:29.652 02:58:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:29.652 [ 00:17:29.652 { 00:17:29.652 "name": "Nvme0n1", 00:17:29.652 "aliases": [ 00:17:29.652 "555e5b6b-c3a7-431a-bd5e-f98c3c73e215" 00:17:29.652 ], 00:17:29.652 "product_name": "NVMe disk", 00:17:29.652 "block_size": 4096, 00:17:29.652 "num_blocks": 38912, 00:17:29.652 "uuid": "555e5b6b-c3a7-431a-bd5e-f98c3c73e215", 00:17:29.653 "assigned_rate_limits": { 00:17:29.653 "rw_ios_per_sec": 0, 00:17:29.653 "rw_mbytes_per_sec": 0, 00:17:29.653 "r_mbytes_per_sec": 0, 00:17:29.653 "w_mbytes_per_sec": 0 00:17:29.653 }, 00:17:29.653 "claimed": false, 00:17:29.653 "zoned": false, 00:17:29.653 "supported_io_types": { 00:17:29.653 "read": true, 00:17:29.653 "write": true, 00:17:29.653 "unmap": true, 00:17:29.653 "write_zeroes": true, 00:17:29.653 "flush": true, 00:17:29.653 "reset": true, 00:17:29.653 "compare": true, 00:17:29.653 "compare_and_write": true, 00:17:29.653 "abort": true, 00:17:29.653 "nvme_admin": true, 00:17:29.653 "nvme_io": true 00:17:29.653 }, 00:17:29.653 "memory_domains": [ 00:17:29.653 { 00:17:29.653 "dma_device_id": "system", 00:17:29.653 "dma_device_type": 1 00:17:29.653 } 00:17:29.653 ], 00:17:29.653 "driver_specific": { 00:17:29.653 "nvme": [ 00:17:29.653 { 00:17:29.653 "trid": { 00:17:29.653 "trtype": "TCP", 00:17:29.653 "adrfam": "IPv4", 00:17:29.653 "traddr": "10.0.0.2", 00:17:29.653 "trsvcid": "4420", 00:17:29.653 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:29.653 }, 00:17:29.653 "ctrlr_data": { 00:17:29.653 "cntlid": 1, 00:17:29.653 "vendor_id": "0x8086", 00:17:29.653 "model_number": "SPDK bdev Controller", 00:17:29.653 "serial_number": "SPDK0", 00:17:29.653 "firmware_revision": "24.05", 00:17:29.653 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:29.653 "oacs": { 00:17:29.653 "security": 0, 00:17:29.653 "format": 0, 00:17:29.653 "firmware": 0, 00:17:29.653 "ns_manage": 0 00:17:29.653 }, 00:17:29.653 "multi_ctrlr": true, 00:17:29.653 "ana_reporting": false 00:17:29.653 }, 00:17:29.653 "vs": { 00:17:29.653 "nvme_version": "1.3" 00:17:29.653 }, 00:17:29.653 "ns_data": { 00:17:29.653 "id": 1, 00:17:29.653 "can_share": true 00:17:29.653 } 00:17:29.653 } 00:17:29.653 ], 00:17:29.653 "mp_policy": "active_passive" 00:17:29.653 } 00:17:29.653 } 00:17:29.653 ] 00:17:29.653 02:58:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=337790 00:17:29.653 02:58:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:29.653 02:58:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # 
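Before bdevperf can attach, the lvol is exported over NVMe/TCP: a subsystem is created, the lvol's UUID is added as its namespace, and a listener is opened on 10.0.0.2:4420. The Nvme0n1 JSON above (block_size 4096, num_blocks 38912) is the initiator-side view of that namespace; the 150 MiB request appears to have been rounded up to whole 4 MiB clusters, i.e. 152 MiB. A sketch of the export side, with this run's names:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC=$SPDK/scripts/rpc.py
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 555e5b6b-c3a7-431a-bd5e-f98c3c73e215
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420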
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:29.911 Running I/O for 10 seconds... 00:17:30.851 Latency(us) 00:17:30.851 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:30.851 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:30.851 Nvme0n1 : 1.00 14799.00 57.81 0.00 0.00 0.00 0.00 0.00 00:17:30.851 =================================================================================================================== 00:17:30.851 Total : 14799.00 57.81 0.00 0.00 0.00 0.00 0.00 00:17:30.851 00:17:31.788 02:58:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 370ad65a-c412-423d-b1e9-85a19aa8479b 00:17:31.788 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:31.788 Nvme0n1 : 2.00 14849.50 58.01 0.00 0.00 0.00 0.00 0.00 00:17:31.788 =================================================================================================================== 00:17:31.788 Total : 14849.50 58.01 0.00 0.00 0.00 0.00 0.00 00:17:31.788 00:17:32.089 true 00:17:32.089 02:58:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 370ad65a-c412-423d-b1e9-85a19aa8479b 00:17:32.089 02:58:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:32.347 02:58:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:32.347 02:58:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:32.347 02:58:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 337790 00:17:32.917 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:32.917 Nvme0n1 : 3.00 14874.33 58.10 0.00 0.00 0.00 0.00 0.00 00:17:32.917 =================================================================================================================== 00:17:32.917 Total : 14874.33 58.10 0.00 0.00 0.00 0.00 0.00 00:17:32.917 00:17:33.851 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:33.851 Nvme0n1 : 4.00 14912.75 58.25 0.00 0.00 0.00 0.00 0.00 00:17:33.851 =================================================================================================================== 00:17:33.851 Total : 14912.75 58.25 0.00 0.00 0.00 0.00 0.00 00:17:33.851 00:17:34.786 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:34.786 Nvme0n1 : 5.00 14951.00 58.40 0.00 0.00 0.00 0.00 0.00 00:17:34.786 =================================================================================================================== 00:17:34.786 Total : 14951.00 58.40 0.00 0.00 0.00 0.00 0.00 00:17:34.786 00:17:36.167 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:36.167 Nvme0n1 : 6.00 15072.50 58.88 0.00 0.00 0.00 0.00 0.00 00:17:36.167 =================================================================================================================== 00:17:36.167 Total : 15072.50 58.88 0.00 0.00 0.00 0.00 0.00 00:17:36.167 00:17:37.103 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:37.103 Nvme0n1 : 7.00 15086.14 58.93 0.00 0.00 0.00 0.00 0.00 00:17:37.103 
=================================================================================================================== 00:17:37.103 Total : 15086.14 58.93 0.00 0.00 0.00 0.00 0.00 00:17:37.103 00:17:38.041 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:38.041 Nvme0n1 : 8.00 15104.38 59.00 0.00 0.00 0.00 0.00 0.00 00:17:38.041 =================================================================================================================== 00:17:38.041 Total : 15104.38 59.00 0.00 0.00 0.00 0.00 0.00 00:17:38.041 00:17:38.981 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:38.981 Nvme0n1 : 9.00 15175.44 59.28 0.00 0.00 0.00 0.00 0.00 00:17:38.981 =================================================================================================================== 00:17:38.981 Total : 15175.44 59.28 0.00 0.00 0.00 0.00 0.00 00:17:38.981 00:17:39.921 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:39.921 Nvme0n1 : 10.00 15187.50 59.33 0.00 0.00 0.00 0.00 0.00 00:17:39.921 =================================================================================================================== 00:17:39.921 Total : 15187.50 59.33 0.00 0.00 0.00 0.00 0.00 00:17:39.921 00:17:39.921 00:17:39.921 Latency(us) 00:17:39.921 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.921 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:39.921 Nvme0n1 : 10.01 15189.00 59.33 0.00 0.00 8420.92 5728.33 19515.16 00:17:39.921 =================================================================================================================== 00:17:39.921 Total : 15189.00 59.33 0.00 0.00 8420.92 5728.33 19515.16 00:17:39.921 0 00:17:39.921 02:58:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 337652 00:17:39.921 02:58:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 337652 ']' 00:17:39.921 02:58:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 337652 00:17:39.921 02:58:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:17:39.921 02:58:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:39.921 02:58:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 337652 00:17:39.921 02:58:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:39.921 02:58:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:39.921 02:58:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 337652' 00:17:39.921 killing process with pid 337652 00:17:39.921 02:58:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 337652 00:17:39.921 Received shutdown signal, test time was about 10.000000 seconds 00:17:39.921 00:17:39.921 Latency(us) 00:17:39.921 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.921 =================================================================================================================== 00:17:39.921 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:39.921 02:58:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 337652 00:17:40.180 02:58:30 
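The core of the dirty test happens at second 2 of the run above: bdev_lvol_grow_lvstore expands the lvstore onto the rescanned 400 MiB backing file while the randwrite workload keeps running, total_data_clusters moves from 49 to 99, and the per-second IOPS stays in the same ~15k band throughout. The grow-and-verify pair, with this run's lvstore UUID:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC=$SPDK/scripts/rpc.py
    LVS=370ad65a-c412-423d-b1e9-85a19aa8479b
    $RPC bdev_lvol_grow_lvstore -u $LVS
    $RPC bdev_lvol_get_lvstores -u $LVS | jq -r '.[0].total_data_clusters'   # expected: 99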
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:40.438 02:58:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:40.697 02:58:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 370ad65a-c412-423d-b1e9-85a19aa8479b 00:17:40.697 02:58:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:40.956 02:58:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:40.956 02:58:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:17:40.956 02:58:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 335166 00:17:40.956 02:58:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 335166 00:17:40.956 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 335166 Killed "${NVMF_APP[@]}" "$@" 00:17:40.956 02:58:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:17:40.956 02:58:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:17:40.956 02:58:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:40.956 02:58:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:40.956 02:58:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:40.956 02:58:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=339116 00:17:40.957 02:58:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:40.957 02:58:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 339116 00:17:40.957 02:58:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 339116 ']' 00:17:40.957 02:58:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.957 02:58:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:40.957 02:58:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.957 02:58:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:40.957 02:58:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:40.957 [2024-05-13 02:58:31.709072] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 
00:17:40.957 [2024-05-13 02:58:31.709148] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:40.957 EAL: No free 2048 kB hugepages reported on node 1 00:17:40.957 [2024-05-13 02:58:31.750558] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:41.215 [2024-05-13 02:58:31.777588] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.215 [2024-05-13 02:58:31.865368] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:41.215 [2024-05-13 02:58:31.865423] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:41.215 [2024-05-13 02:58:31.865452] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:41.216 [2024-05-13 02:58:31.865463] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:41.216 [2024-05-13 02:58:31.865473] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:41.216 [2024-05-13 02:58:31.865499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.216 02:58:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:41.216 02:58:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:17:41.216 02:58:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:41.216 02:58:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:41.216 02:58:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:41.216 02:58:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:41.216 02:58:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:41.474 [2024-05-13 02:58:32.233433] blobstore.c:4789:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:41.474 [2024-05-13 02:58:32.233568] blobstore.c:4736:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:41.474 [2024-05-13 02:58:32.233617] blobstore.c:4736:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:41.474 02:58:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:17:41.474 02:58:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 555e5b6b-c3a7-431a-bd5e-f98c3c73e215 00:17:41.474 02:58:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=555e5b6b-c3a7-431a-bd5e-f98c3c73e215 00:17:41.474 02:58:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:41.474 02:58:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:17:41.474 02:58:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:41.474 02:58:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # 
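This is the dirty part of the scenario: the nvmf_tgt that owns the lvstore is killed with SIGKILL (leaving the lvstore metadata dirty on disk), a fresh target is started, and re-creating the AIO bdev triggers blobstore recovery, which is what the "Performing recovery on blobstore" and "Recover: blob 0x0/0x1" notices above report. A condensed sketch using this run's flags (the harness additionally wraps the target in its cvl_0_0_ns_spdk network namespace, omitted here):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC=$SPDK/scripts/rpc.py
    # nvmfpid: pid of the target currently holding the lvstore (335166 in this run)
    kill -9 $nvmfpid                                  # simulate a crash
    $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &  # start a fresh target
    nvmfpid=$!
    # once its RPC socket is up, re-attach the backing file; examine runs
    # blobstore recovery and re-exposes lvs/lvol
    $RPC bdev_aio_create $SPDK/test/nvmf/target/aio_bdev aio_bdev 4096
    $RPC bdev_wait_for_examine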
bdev_timeout=2000 00:17:41.474 02:58:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:41.732 02:58:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 555e5b6b-c3a7-431a-bd5e-f98c3c73e215 -t 2000 00:17:41.992 [ 00:17:41.992 { 00:17:41.992 "name": "555e5b6b-c3a7-431a-bd5e-f98c3c73e215", 00:17:41.992 "aliases": [ 00:17:41.992 "lvs/lvol" 00:17:41.992 ], 00:17:41.992 "product_name": "Logical Volume", 00:17:41.992 "block_size": 4096, 00:17:41.992 "num_blocks": 38912, 00:17:41.992 "uuid": "555e5b6b-c3a7-431a-bd5e-f98c3c73e215", 00:17:41.992 "assigned_rate_limits": { 00:17:41.992 "rw_ios_per_sec": 0, 00:17:41.992 "rw_mbytes_per_sec": 0, 00:17:41.992 "r_mbytes_per_sec": 0, 00:17:41.992 "w_mbytes_per_sec": 0 00:17:41.992 }, 00:17:41.992 "claimed": false, 00:17:41.992 "zoned": false, 00:17:41.992 "supported_io_types": { 00:17:41.992 "read": true, 00:17:41.992 "write": true, 00:17:41.992 "unmap": true, 00:17:41.992 "write_zeroes": true, 00:17:41.992 "flush": false, 00:17:41.992 "reset": true, 00:17:41.992 "compare": false, 00:17:41.992 "compare_and_write": false, 00:17:41.992 "abort": false, 00:17:41.992 "nvme_admin": false, 00:17:41.992 "nvme_io": false 00:17:41.992 }, 00:17:41.992 "driver_specific": { 00:17:41.992 "lvol": { 00:17:41.992 "lvol_store_uuid": "370ad65a-c412-423d-b1e9-85a19aa8479b", 00:17:41.992 "base_bdev": "aio_bdev", 00:17:41.992 "thin_provision": false, 00:17:41.992 "snapshot": false, 00:17:41.992 "clone": false, 00:17:41.992 "esnap_clone": false 00:17:41.992 } 00:17:41.992 } 00:17:41.992 } 00:17:41.992 ] 00:17:41.992 02:58:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:17:41.992 02:58:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 370ad65a-c412-423d-b1e9-85a19aa8479b 00:17:41.992 02:58:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:17:42.252 02:58:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:17:42.252 02:58:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 370ad65a-c412-423d-b1e9-85a19aa8479b 00:17:42.252 02:58:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:17:42.510 02:58:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:17:42.510 02:58:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:42.771 [2024-05-13 02:58:33.474587] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:42.771 02:58:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 370ad65a-c412-423d-b1e9-85a19aa8479b 00:17:42.771 02:58:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:17:42.771 02:58:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 370ad65a-c412-423d-b1e9-85a19aa8479b 00:17:42.771 02:58:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:42.771 02:58:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:42.771 02:58:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:42.771 02:58:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:42.771 02:58:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:42.771 02:58:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:42.771 02:58:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:42.771 02:58:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:42.771 02:58:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 370ad65a-c412-423d-b1e9-85a19aa8479b 00:17:43.030 request: 00:17:43.030 { 00:17:43.030 "uuid": "370ad65a-c412-423d-b1e9-85a19aa8479b", 00:17:43.030 "method": "bdev_lvol_get_lvstores", 00:17:43.030 "req_id": 1 00:17:43.030 } 00:17:43.030 Got JSON-RPC error response 00:17:43.030 response: 00:17:43.030 { 00:17:43.030 "code": -19, 00:17:43.030 "message": "No such device" 00:17:43.030 } 00:17:43.030 02:58:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:17:43.030 02:58:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:43.030 02:58:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:43.030 02:58:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:43.030 02:58:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:43.287 aio_bdev 00:17:43.287 02:58:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 555e5b6b-c3a7-431a-bd5e-f98c3c73e215 00:17:43.287 02:58:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=555e5b6b-c3a7-431a-bd5e-f98c3c73e215 00:17:43.287 02:58:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:43.287 02:58:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:17:43.287 02:58:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:43.287 02:58:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:43.287 02:58:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:43.544 02:58:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 555e5b6b-c3a7-431a-bd5e-f98c3c73e215 -t 2000 00:17:43.803 [ 00:17:43.803 { 00:17:43.803 "name": "555e5b6b-c3a7-431a-bd5e-f98c3c73e215", 00:17:43.803 "aliases": [ 00:17:43.803 "lvs/lvol" 00:17:43.803 ], 00:17:43.803 "product_name": "Logical Volume", 00:17:43.803 "block_size": 4096, 00:17:43.803 "num_blocks": 38912, 00:17:43.803 "uuid": "555e5b6b-c3a7-431a-bd5e-f98c3c73e215", 00:17:43.803 "assigned_rate_limits": { 00:17:43.803 "rw_ios_per_sec": 0, 00:17:43.803 "rw_mbytes_per_sec": 0, 00:17:43.803 "r_mbytes_per_sec": 0, 00:17:43.803 "w_mbytes_per_sec": 0 00:17:43.803 }, 00:17:43.803 "claimed": false, 00:17:43.803 "zoned": false, 00:17:43.803 "supported_io_types": { 00:17:43.803 "read": true, 00:17:43.803 "write": true, 00:17:43.803 "unmap": true, 00:17:43.803 "write_zeroes": true, 00:17:43.803 "flush": false, 00:17:43.803 "reset": true, 00:17:43.803 "compare": false, 00:17:43.803 "compare_and_write": false, 00:17:43.803 "abort": false, 00:17:43.803 "nvme_admin": false, 00:17:43.803 "nvme_io": false 00:17:43.803 }, 00:17:43.803 "driver_specific": { 00:17:43.803 "lvol": { 00:17:43.803 "lvol_store_uuid": "370ad65a-c412-423d-b1e9-85a19aa8479b", 00:17:43.803 "base_bdev": "aio_bdev", 00:17:43.803 "thin_provision": false, 00:17:43.803 "snapshot": false, 00:17:43.803 "clone": false, 00:17:43.803 "esnap_clone": false 00:17:43.803 } 00:17:43.803 } 00:17:43.803 } 00:17:43.803 ] 00:17:43.803 02:58:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:17:43.803 02:58:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 370ad65a-c412-423d-b1e9-85a19aa8479b 00:17:43.803 02:58:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:44.061 02:58:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:44.061 02:58:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 370ad65a-c412-423d-b1e9-85a19aa8479b 00:17:44.061 02:58:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:44.319 02:58:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:44.319 02:58:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 555e5b6b-c3a7-431a-bd5e-f98c3c73e215 00:17:44.577 02:58:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 370ad65a-c412-423d-b1e9-85a19aa8479b 00:17:44.836 02:58:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:45.095 02:58:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:45.095 00:17:45.095 real 0m19.049s 00:17:45.095 
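After recovery the test verifies that nothing was lost: free_clusters is still 61 and total_data_clusters is still 99, matching the pre-crash state, and only then is everything torn down in dependency order. A condensed sketch with this run's identifiers:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC=$SPDK/scripts/rpc.py
    LVS=370ad65a-c412-423d-b1e9-85a19aa8479b
    test "$($RPC bdev_lvol_get_lvstores -u $LVS | jq -r '.[0].free_clusters')" -eq 61
    test "$($RPC bdev_lvol_get_lvstores -u $LVS | jq -r '.[0].total_data_clusters')" -eq 99
    # teardown: lvol, then lvstore, then the AIO bdev and its backing file
    $RPC bdev_lvol_delete 555e5b6b-c3a7-431a-bd5e-f98c3c73e215
    $RPC bdev_lvol_delete_lvstore -u $LVS
    $RPC bdev_aio_delete aio_bdev
    rm -f $SPDK/test/nvmf/target/aio_bdev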
user 0m48.152s 00:17:45.095 sys 0m4.768s 00:17:45.095 02:58:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:45.095 02:58:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:45.095 ************************************ 00:17:45.096 END TEST lvs_grow_dirty 00:17:45.096 ************************************ 00:17:45.356 02:58:35 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:17:45.356 02:58:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:17:45.356 02:58:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:17:45.356 02:58:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:17:45.357 02:58:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:45.357 02:58:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:17:45.357 02:58:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:17:45.357 02:58:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:17:45.357 02:58:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:45.357 nvmf_trace.0 00:17:45.357 02:58:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:17:45.357 02:58:35 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:45.357 02:58:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:45.357 02:58:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:17:45.357 02:58:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:45.357 02:58:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:17:45.357 02:58:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:45.357 02:58:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:45.357 rmmod nvme_tcp 00:17:45.357 rmmod nvme_fabrics 00:17:45.357 rmmod nvme_keyring 00:17:45.357 02:58:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:45.357 02:58:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:17:45.357 02:58:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:17:45.357 02:58:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 339116 ']' 00:17:45.357 02:58:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 339116 00:17:45.357 02:58:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 339116 ']' 00:17:45.357 02:58:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 339116 00:17:45.357 02:58:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:17:45.357 02:58:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:45.357 02:58:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 339116 00:17:45.357 02:58:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:45.357 02:58:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:45.357 02:58:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 339116' 00:17:45.357 killing process 
with pid 339116 00:17:45.357 02:58:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 339116 00:17:45.357 02:58:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 339116 00:17:45.616 02:58:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:45.616 02:58:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:45.616 02:58:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:45.616 02:58:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:45.616 02:58:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:45.616 02:58:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:45.616 02:58:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:45.616 02:58:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:47.525 02:58:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:47.830 00:17:47.830 real 0m41.901s 00:17:47.830 user 1m10.882s 00:17:47.830 sys 0m8.532s 00:17:47.830 02:58:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:47.830 02:58:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:47.830 ************************************ 00:17:47.830 END TEST nvmf_lvs_grow 00:17:47.830 ************************************ 00:17:47.830 02:58:38 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:47.830 02:58:38 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:47.830 02:58:38 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:47.830 02:58:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:47.830 ************************************ 00:17:47.830 START TEST nvmf_bdev_io_wait 00:17:47.830 ************************************ 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:47.830 * Looking for test storage... 
00:17:47.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:17:47.830 02:58:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:49.734 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:49.734 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:17:49.734 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:49.734 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:49.734 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:49.734 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:49.734 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:49.734 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:49.735 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:49.735 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:49.735 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:49.735 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:49.735 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:49.735 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:17:49.735 00:17:49.735 --- 10.0.0.2 ping statistics --- 00:17:49.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:49.735 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:49.735 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:49.735 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:17:49.735 00:17:49.735 --- 10.0.0.1 ping statistics --- 00:17:49.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:49.735 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=341638 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 341638 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 341638 ']' 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:49.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:49.735 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:49.735 [2024-05-13 02:58:40.474721] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 
00:17:49.735 [2024-05-13 02:58:40.474802] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:49.735 EAL: No free 2048 kB hugepages reported on node 1 00:17:49.735 [2024-05-13 02:58:40.521441] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:49.995 [2024-05-13 02:58:40.551890] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:49.995 [2024-05-13 02:58:40.647226] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:49.995 [2024-05-13 02:58:40.647280] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:49.996 [2024-05-13 02:58:40.647296] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:49.996 [2024-05-13 02:58:40.647310] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:49.996 [2024-05-13 02:58:40.647322] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:49.996 [2024-05-13 02:58:40.647378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:49.996 [2024-05-13 02:58:40.647430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:49.996 [2024-05-13 02:58:40.647466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:49.996 [2024-05-13 02:58:40.647469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.996 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:49.996 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:17:49.996 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:49.996 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:49.996 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:49.996 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:49.996 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:49.996 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.996 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:49.996 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.996 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:49.996 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.996 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:50.255 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.255 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:50.255 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.255 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:50.255 [2024-05-13 02:58:40.829076] tcp.c: 
670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:50.255 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.255 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:50.255 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.255 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:50.255 Malloc0 00:17:50.255 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:50.256 [2024-05-13 02:58:40.897037] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:50.256 [2024-05-13 02:58:40.897316] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=341663 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=341664 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=341666 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 
4096 -w read -t 1 -s 256 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:50.256 { 00:17:50.256 "params": { 00:17:50.256 "name": "Nvme$subsystem", 00:17:50.256 "trtype": "$TEST_TRANSPORT", 00:17:50.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:50.256 "adrfam": "ipv4", 00:17:50.256 "trsvcid": "$NVMF_PORT", 00:17:50.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:50.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:50.256 "hdgst": ${hdgst:-false}, 00:17:50.256 "ddgst": ${ddgst:-false} 00:17:50.256 }, 00:17:50.256 "method": "bdev_nvme_attach_controller" 00:17:50.256 } 00:17:50.256 EOF 00:17:50.256 )") 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=341669 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:50.256 { 00:17:50.256 "params": { 00:17:50.256 "name": "Nvme$subsystem", 00:17:50.256 "trtype": "$TEST_TRANSPORT", 00:17:50.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:50.256 "adrfam": "ipv4", 00:17:50.256 "trsvcid": "$NVMF_PORT", 00:17:50.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:50.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:50.256 "hdgst": ${hdgst:-false}, 00:17:50.256 "ddgst": ${ddgst:-false} 00:17:50.256 }, 00:17:50.256 "method": "bdev_nvme_attach_controller" 00:17:50.256 } 00:17:50.256 EOF 00:17:50.256 )") 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:50.256 { 00:17:50.256 "params": { 00:17:50.256 "name": "Nvme$subsystem", 00:17:50.256 "trtype": "$TEST_TRANSPORT", 00:17:50.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:50.256 "adrfam": "ipv4", 00:17:50.256 "trsvcid": "$NVMF_PORT", 00:17:50.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:50.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:50.256 "hdgst": ${hdgst:-false}, 00:17:50.256 "ddgst": ${ddgst:-false} 00:17:50.256 }, 00:17:50.256 "method": "bdev_nvme_attach_controller" 00:17:50.256 } 00:17:50.256 EOF 00:17:50.256 )") 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@532 -- # local subsystem config 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:50.256 { 00:17:50.256 "params": { 00:17:50.256 "name": "Nvme$subsystem", 00:17:50.256 "trtype": "$TEST_TRANSPORT", 00:17:50.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:50.256 "adrfam": "ipv4", 00:17:50.256 "trsvcid": "$NVMF_PORT", 00:17:50.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:50.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:50.256 "hdgst": ${hdgst:-false}, 00:17:50.256 "ddgst": ${ddgst:-false} 00:17:50.256 }, 00:17:50.256 "method": "bdev_nvme_attach_controller" 00:17:50.256 } 00:17:50.256 EOF 00:17:50.256 )") 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 341663 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:50.256 "params": { 00:17:50.256 "name": "Nvme1", 00:17:50.256 "trtype": "tcp", 00:17:50.256 "traddr": "10.0.0.2", 00:17:50.256 "adrfam": "ipv4", 00:17:50.256 "trsvcid": "4420", 00:17:50.256 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:50.256 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:50.256 "hdgst": false, 00:17:50.256 "ddgst": false 00:17:50.256 }, 00:17:50.256 "method": "bdev_nvme_attach_controller" 00:17:50.256 }' 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:50.256 "params": { 00:17:50.256 "name": "Nvme1", 00:17:50.256 "trtype": "tcp", 00:17:50.256 "traddr": "10.0.0.2", 00:17:50.256 "adrfam": "ipv4", 00:17:50.256 "trsvcid": "4420", 00:17:50.256 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:50.256 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:50.256 "hdgst": false, 00:17:50.256 "ddgst": false 00:17:50.256 }, 00:17:50.256 "method": "bdev_nvme_attach_controller" 00:17:50.256 }' 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:50.256 "params": { 00:17:50.256 "name": "Nvme1", 00:17:50.256 "trtype": "tcp", 00:17:50.256 "traddr": "10.0.0.2", 00:17:50.256 "adrfam": "ipv4", 00:17:50.256 "trsvcid": "4420", 00:17:50.256 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:50.256 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:50.256 "hdgst": false, 00:17:50.256 "ddgst": false 00:17:50.256 }, 00:17:50.256 "method": "bdev_nvme_attach_controller" 00:17:50.256 }' 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:50.256 02:58:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:50.256 
"params": { 00:17:50.256 "name": "Nvme1", 00:17:50.256 "trtype": "tcp", 00:17:50.256 "traddr": "10.0.0.2", 00:17:50.256 "adrfam": "ipv4", 00:17:50.256 "trsvcid": "4420", 00:17:50.256 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:50.256 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:50.256 "hdgst": false, 00:17:50.256 "ddgst": false 00:17:50.256 }, 00:17:50.256 "method": "bdev_nvme_attach_controller" 00:17:50.256 }' 00:17:50.257 [2024-05-13 02:58:40.944796] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:17:50.257 [2024-05-13 02:58:40.944867] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:50.257 [2024-05-13 02:58:40.945770] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:17:50.257 [2024-05-13 02:58:40.945768] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:17:50.257 [2024-05-13 02:58:40.945768] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:17:50.257 [2024-05-13 02:58:40.945853] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-05-13 02:58:40.945853] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-05-13 02:58:40.945854] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:17:50.257 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:17:50.257 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:50.257 EAL: No free 2048 kB hugepages reported on node 1 00:17:50.516 [2024-05-13 02:58:41.089759] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:50.516 EAL: No free 2048 kB hugepages reported on node 1 00:17:50.516 [2024-05-13 02:58:41.118724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.516 [2024-05-13 02:58:41.195761] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:50.516 EAL: No free 2048 kB hugepages reported on node 1 00:17:50.516 [2024-05-13 02:58:41.197781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:50.516 [2024-05-13 02:58:41.226115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.516 [2024-05-13 02:58:41.293971] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:50.516 EAL: No free 2048 kB hugepages reported on node 1 00:17:50.516 [2024-05-13 02:58:41.301137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:50.775 [2024-05-13 02:58:41.324951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.775 [2024-05-13 02:58:41.364813] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:17:50.775 [2024-05-13 02:58:41.395194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.775 [2024-05-13 02:58:41.399244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:50.775 [2024-05-13 02:58:41.464083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:17:50.775 Running I/O for 1 seconds... 00:17:51.033 Running I/O for 1 seconds... 00:17:51.033 Running I/O for 1 seconds... 00:17:51.033 Running I/O for 1 seconds... 00:17:51.970 00:17:51.970 Latency(us) 00:17:51.970 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:51.970 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:17:51.970 Nvme1n1 : 1.02 5819.63 22.73 0.00 0.00 21798.81 9077.95 31263.10 00:17:51.970 =================================================================================================================== 00:17:51.970 Total : 5819.63 22.73 0.00 0.00 21798.81 9077.95 31263.10 00:17:51.970 00:17:51.970 Latency(us) 00:17:51.970 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:51.970 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:17:51.970 Nvme1n1 : 1.01 10308.13 40.27 0.00 0.00 12332.17 3592.34 23204.60 00:17:51.970 =================================================================================================================== 00:17:51.970 Total : 10308.13 40.27 0.00 0.00 12332.17 3592.34 23204.60 00:17:51.970 00:17:51.970 Latency(us) 00:17:51.970 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:51.970 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:17:51.970 Nvme1n1 : 1.01 6802.86 26.57 0.00 0.00 18735.12 4077.80 37088.52 00:17:51.970 =================================================================================================================== 00:17:51.970 Total : 6802.86 26.57 0.00 0.00 18735.12 4077.80 37088.52 00:17:52.230 00:17:52.230 Latency(us) 00:17:52.230 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:52.230 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:17:52.230 Nvme1n1 : 1.00 155932.70 609.11 0.00 0.00 817.75 268.52 1286.45 00:17:52.230 =================================================================================================================== 00:17:52.230 Total : 155932.70 609.11 0.00 0.00 817.75 268.52 1286.45 00:17:52.230 02:58:42 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 341664 00:17:52.230 02:58:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 341666 00:17:52.491 02:58:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 341669 00:17:52.491 02:58:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:52.491 02:58:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.491 02:58:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:52.491 02:58:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.491 02:58:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:52.491 02:58:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:52.491 02:58:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:52.491 02:58:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:17:52.491 02:58:43 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:52.491 02:58:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:17:52.491 02:58:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:52.491 02:58:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:52.491 rmmod nvme_tcp 00:17:52.491 rmmod nvme_fabrics 00:17:52.491 rmmod nvme_keyring 00:17:52.491 02:58:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:52.491 02:58:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:17:52.491 02:58:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:17:52.491 02:58:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 341638 ']' 00:17:52.491 02:58:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 341638 00:17:52.491 02:58:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 341638 ']' 00:17:52.491 02:58:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 341638 00:17:52.491 02:58:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:17:52.491 02:58:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:52.491 02:58:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 341638 00:17:52.491 02:58:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:52.491 02:58:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:52.491 02:58:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 341638' 00:17:52.491 killing process with pid 341638 00:17:52.491 02:58:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 341638 00:17:52.491 [2024-05-13 02:58:43.198279] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:52.491 02:58:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 341638 00:17:52.750 02:58:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:52.750 02:58:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:52.750 02:58:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:52.750 02:58:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:52.750 02:58:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:52.750 02:58:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:52.750 02:58:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:52.750 02:58:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:54.655 02:58:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:54.913 00:17:54.913 real 0m7.072s 00:17:54.913 user 0m16.828s 00:17:54.913 sys 0m3.377s 00:17:54.913 02:58:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:54.913 02:58:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:54.913 ************************************ 00:17:54.913 END TEST 
nvmf_bdev_io_wait 00:17:54.913 ************************************ 00:17:54.913 02:58:45 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:54.913 02:58:45 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:54.913 02:58:45 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:54.913 02:58:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:54.913 ************************************ 00:17:54.913 START TEST nvmf_queue_depth 00:17:54.913 ************************************ 00:17:54.913 02:58:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:54.913 * Looking for test storage... 00:17:54.913 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:54.913 02:58:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:54.913 02:58:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:17:54.913 02:58:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:54.913 02:58:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:54.913 02:58:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:54.913 02:58:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:54.913 02:58:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:54.914 02:58:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:54.914 02:58:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:54.914 02:58:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:54.914 02:58:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:54.914 02:58:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:54.914 02:58:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:54.914 02:58:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:54.914 02:58:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:54.914 02:58:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:54.914 02:58:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:54.914 02:58:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:54.914 02:58:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:54.914 02:58:45 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:54.914 02:58:45 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:54.914 02:58:45 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:54.914 02:58:45 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.914 02:58:45 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.914 02:58:45 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.914 02:58:45 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:17:54.914 02:58:45 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.914 02:58:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:17:54.914 02:58:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:54.914 02:58:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:54.914 02:58:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:54.914 02:58:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:54.914 02:58:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:54.914 02:58:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:54.914 02:58:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:54.914 02:58:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:54.914 02:58:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:54.914 02:58:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:54.914 02:58:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:54.914 02:58:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:54.914 02:58:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:54.914 02:58:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:54.914 02:58:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:54.914 02:58:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:54.914 02:58:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:54.914 02:58:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:54.914 02:58:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:54.914 02:58:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:54.914 02:58:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:54.914 02:58:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:54.914 02:58:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:17:54.914 02:58:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:56.821 02:58:47 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:56.821 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:56.821 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
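The device-discovery trace above resolves each E810 PCI function to its kernel netdev through sysfs before the results are echoed below. A minimal standalone sketch of the same lookup, assuming the PCI addresses reported in this run:
# Sketch: mirrors the pci_net_devs glob traced above; prints the netdev behind each port.
for pci in 0000:0a:00.0 0000:0a:00.1; do
  for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
    [ -e "$netdir" ] || continue            # skip functions that expose no netdev
    echo "Found net devices under $pci: ${netdir##*/}"
  done
done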
00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:56.821 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:56.821 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:56.821 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:56.822 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:56.822 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:56.822 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:17:56.822 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:56.822 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:56.822 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:56.822 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:56.822 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:56.822 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:56.822 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:56.822 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:56.822 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:56.822 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:56.822 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:56.822 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:56.822 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:56.822 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:56.822 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:56.822 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:57.082 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:57.082 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:57.082 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:57.082 02:58:47 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:57.082 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:57.082 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:57.082 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:57.082 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:57.082 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:17:57.082 00:17:57.082 --- 10.0.0.2 ping statistics --- 00:17:57.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.082 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:17:57.082 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:57.082 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:57.082 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:17:57.082 00:17:57.082 --- 10.0.0.1 ping statistics --- 00:17:57.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.082 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:17:57.082 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:57.082 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:17:57.082 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:57.082 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:57.082 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:57.082 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:57.082 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:57.082 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:57.082 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:57.082 02:58:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:57.082 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:57.082 02:58:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:57.082 02:58:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:57.082 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=343882 00:17:57.082 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:57.082 02:58:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 343882 00:17:57.082 02:58:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 343882 ']' 00:17:57.082 02:58:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.082 02:58:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:57.082 02:58:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
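Once the target process accepts RPCs on /var/tmp/spdk.sock, the queue_depth script provisions it with the rpc_cmd calls traced further below: create the TCP transport, a 64 MiB / 512 B malloc bdev, the cnode1 subsystem, its namespace, and the 10.0.0.2:4420 listener. A minimal sketch of the same sequence issued directly with scripts/rpc.py, assuming an SPDK checkout at $SPDK and the command values shown in this run:
# Sketch only: equivalent of the rpc_cmd calls made by queue_depth.sh (arguments copied from the trace).
rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }
rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create 64 512 -b Malloc0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420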
00:17:57.082 02:58:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:57.082 02:58:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:57.082 [2024-05-13 02:58:47.813533] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:17:57.082 [2024-05-13 02:58:47.813627] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:57.082 EAL: No free 2048 kB hugepages reported on node 1 00:17:57.082 [2024-05-13 02:58:47.853036] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:57.082 [2024-05-13 02:58:47.879552] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.342 [2024-05-13 02:58:47.967074] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:57.342 [2024-05-13 02:58:47.967139] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:57.342 [2024-05-13 02:58:47.967168] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:57.342 [2024-05-13 02:58:47.967180] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:57.342 [2024-05-13 02:58:47.967198] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:57.342 [2024-05-13 02:58:47.967239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:57.342 02:58:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:57.342 02:58:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:17:57.342 02:58:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:57.342 02:58:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:57.342 02:58:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:57.342 02:58:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:57.342 02:58:48 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:57.342 02:58:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.342 02:58:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:57.342 [2024-05-13 02:58:48.107654] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:57.342 02:58:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.342 02:58:48 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:57.342 02:58:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.342 02:58:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:57.600 Malloc0 00:17:57.600 02:58:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.600 02:58:48 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:57.601 02:58:48 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.601 02:58:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:57.601 02:58:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.601 02:58:48 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:57.601 02:58:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.601 02:58:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:57.601 02:58:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.601 02:58:48 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:57.601 02:58:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.601 02:58:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:57.601 [2024-05-13 02:58:48.168166] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:57.601 [2024-05-13 02:58:48.168453] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:57.601 02:58:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.601 02:58:48 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=344016 00:17:57.601 02:58:48 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:57.601 02:58:48 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:57.601 02:58:48 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 344016 /var/tmp/bdevperf.sock 00:17:57.601 02:58:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 344016 ']' 00:17:57.601 02:58:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:57.601 02:58:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:57.601 02:58:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:57.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:57.601 02:58:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:57.601 02:58:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:57.601 [2024-05-13 02:58:48.211953] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:17:57.601 [2024-05-13 02:58:48.212040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid344016 ] 00:17:57.601 EAL: No free 2048 kB hugepages reported on node 1 00:17:57.601 [2024-05-13 02:58:48.244203] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:17:57.601 [2024-05-13 02:58:48.273979] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.601 [2024-05-13 02:58:48.365590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.859 02:58:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:57.859 02:58:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:17:57.859 02:58:48 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:57.859 02:58:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.859 02:58:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:57.859 NVMe0n1 00:17:57.860 02:58:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.860 02:58:48 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:58.119 Running I/O for 10 seconds... 00:18:08.105 00:18:08.105 Latency(us) 00:18:08.105 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:08.105 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:18:08.105 Verification LBA range: start 0x0 length 0x4000 00:18:08.105 NVMe0n1 : 10.08 8739.74 34.14 0.00 0.00 116681.28 24855.13 75342.13 00:18:08.105 =================================================================================================================== 00:18:08.105 Total : 8739.74 34.14 0.00 0.00 116681.28 24855.13 75342.13 00:18:08.105 0 00:18:08.105 02:58:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 344016 00:18:08.105 02:58:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 344016 ']' 00:18:08.105 02:58:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 344016 00:18:08.105 02:58:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:18:08.105 02:58:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:08.105 02:58:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 344016 00:18:08.105 02:58:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:08.105 02:58:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:08.105 02:58:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 344016' 00:18:08.105 killing process with pid 344016 00:18:08.106 02:58:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 344016 00:18:08.106 Received shutdown signal, test time was about 10.000000 seconds 00:18:08.106 00:18:08.106 Latency(us) 00:18:08.106 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:08.106 =================================================================================================================== 00:18:08.106 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:08.106 02:58:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 344016 00:18:08.365 02:58:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:08.365 02:58:59 
nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:18:08.365 02:58:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:08.365 02:58:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:18:08.365 02:58:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:08.365 02:58:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:18:08.365 02:58:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:08.365 02:58:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:08.365 rmmod nvme_tcp 00:18:08.365 rmmod nvme_fabrics 00:18:08.365 rmmod nvme_keyring 00:18:08.365 02:58:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:08.365 02:58:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:18:08.365 02:58:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:18:08.365 02:58:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 343882 ']' 00:18:08.365 02:58:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 343882 00:18:08.365 02:58:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 343882 ']' 00:18:08.365 02:58:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 343882 00:18:08.365 02:58:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:18:08.365 02:58:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:08.365 02:58:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 343882 00:18:08.365 02:58:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:08.365 02:58:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:08.365 02:58:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 343882' 00:18:08.365 killing process with pid 343882 00:18:08.365 02:58:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 343882 00:18:08.366 [2024-05-13 02:58:59.153295] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:08.366 02:58:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 343882 00:18:08.624 02:58:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:08.625 02:58:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:08.625 02:58:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:08.625 02:58:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:08.625 02:58:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:08.625 02:58:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:08.625 02:58:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:08.625 02:58:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:11.165 02:59:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:11.165 00:18:11.165 real 0m15.951s 00:18:11.165 user 0m22.291s 00:18:11.165 sys 0m3.103s 00:18:11.165 02:59:01 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:11.165 02:59:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:11.165 ************************************ 00:18:11.165 END TEST nvmf_queue_depth 00:18:11.165 ************************************ 00:18:11.165 02:59:01 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:11.165 02:59:01 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:11.165 02:59:01 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:11.165 02:59:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:11.165 ************************************ 00:18:11.165 START TEST nvmf_target_multipath 00:18:11.165 ************************************ 00:18:11.165 02:59:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:11.165 * Looking for test storage... 00:18:11.165 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:11.165 02:59:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:11.165 02:59:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:11.165 02:59:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:11.165 02:59:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:11.165 02:59:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:11.165 02:59:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:11.165 02:59:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:11.165 02:59:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:11.165 02:59:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:11.165 02:59:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:11.165 02:59:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:11.165 02:59:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:11.165 02:59:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:11.165 02:59:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:11.165 02:59:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:11.165 02:59:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:11.165 02:59:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:11.165 02:59:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:11.165 02:59:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:11.165 02:59:01 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:11.165 02:59:01 nvmf_tcp.nvmf_target_multipath -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:11.165 02:59:01 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:11.165 02:59:01 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.165 02:59:01 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.165 02:59:01 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.165 02:59:01 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:18:11.166 02:59:01 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.166 02:59:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:18:11.166 02:59:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:11.166 02:59:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:11.166 02:59:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:11.166 02:59:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:11.166 02:59:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:11.166 02:59:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:11.166 02:59:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:11.166 02:59:01 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:11.166 02:59:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:11.166 02:59:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:11.166 02:59:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:11.166 02:59:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:11.166 02:59:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:18:11.166 02:59:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:11.166 02:59:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:11.166 02:59:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:11.166 02:59:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:11.166 02:59:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:11.166 02:59:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:11.166 02:59:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:11.166 02:59:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:11.166 02:59:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:11.166 02:59:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:11.166 02:59:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:18:11.166 02:59:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 
-- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:13.097 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:13.097 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 
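[annotation] The nvmf/common.sh trace running through this stretch is the NIC discovery step: it collects the PCI device IDs the framework supports (Intel E810/X722 plus several Mellanox parts), resolves them to PCI addresses, and then, as the entries just below show, maps each address to its kernel net device through sysfs. A rough equivalent, with the script's pci_bus_cache lookup replaced by a hypothetical lspci query and only the Intel IDs shown:

    intel=8086
    e810=(1592 159b); x722=(37d2)       # device IDs from the trace; Mellanox IDs omitted

    # resolve supported vendor:device pairs to PCI addresses (stand-in for pci_bus_cache)
    pci_devs=()
    for dev in "${e810[@]}" "${x722[@]}"; do
        pci_devs+=($(lspci -Dnmm -d "$intel:$dev" | awk '{print $1}'))
    done

    # map each PCI address to its net device via sysfs; the real script also
    # filters on link state (the [[ up == up ]] checks in the trace)
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        for path in /sys/bus/pci/devices/"$pci"/net/*; do
            [[ -e $path ]] && net_devs+=("${path##*/}")
        done
    done
    echo "Found net devices: ${net_devs[*]}"    # cvl_0_0 and cvl_0_1 in this run

The multipath test additionally needs a second target-side address; since NVMF_SECOND_TARGET_IP stays empty in this run, it prints 'only one NIC for nvmf test' further down and exits 0.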
00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:13.097 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:13.097 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:13.097 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:13.097 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:18:13.097 00:18:13.097 --- 10.0.0.2 ping statistics --- 00:18:13.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.097 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:13.097 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:13.097 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:18:13.097 00:18:13.097 --- 10.0.0.1 ping statistics --- 00:18:13.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.097 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:13.097 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:13.098 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:13.098 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:13.098 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:13.098 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:13.098 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:13.098 02:59:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:18:13.098 02:59:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:18:13.098 only one NIC for nvmf test 00:18:13.098 02:59:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:18:13.098 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:13.098 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:13.098 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:13.098 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:13.098 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:13.098 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:13.098 rmmod nvme_tcp 00:18:13.098 rmmod nvme_fabrics 00:18:13.098 rmmod nvme_keyring 00:18:13.098 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:13.098 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:13.098 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:13.098 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:13.098 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:13.098 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:13.098 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:13.098 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:13.098 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:13.098 02:59:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.098 02:59:03 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:13.098 02:59:03 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.003 02:59:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:18:15.003 02:59:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:18:15.003 02:59:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:18:15.003 02:59:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:15.003 02:59:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:15.003 02:59:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:15.003 02:59:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:15.003 02:59:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:15.003 02:59:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:15.003 02:59:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:15.003 02:59:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:15.003 02:59:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:15.003 02:59:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:15.003 02:59:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:15.003 02:59:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:15.003 02:59:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:15.003 02:59:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:15.003 02:59:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:15.003 02:59:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.003 02:59:05 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:15.003 02:59:05 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.003 02:59:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:15.003 00:18:15.003 real 0m4.155s 00:18:15.003 user 0m0.760s 00:18:15.003 sys 0m1.384s 00:18:15.003 02:59:05 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:15.003 02:59:05 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:15.003 ************************************ 00:18:15.003 END TEST nvmf_target_multipath 00:18:15.003 ************************************ 00:18:15.003 02:59:05 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:15.003 02:59:05 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:15.003 02:59:05 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:15.003 02:59:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:15.003 ************************************ 00:18:15.003 START TEST nvmf_zcopy 00:18:15.003 ************************************ 00:18:15.003 02:59:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:15.003 * Looking for test storage... 
00:18:15.003 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:15.003 02:59:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:15.003 02:59:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:18:15.003 02:59:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:15.003 02:59:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:15.003 02:59:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:15.003 02:59:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:15.003 02:59:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:15.003 02:59:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:15.003 02:59:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:15.003 02:59:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:15.003 02:59:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:15.003 02:59:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:15.003 02:59:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:15.003 02:59:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:15.003 02:59:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:15.003 02:59:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:15.003 02:59:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:15.003 02:59:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:15.003 02:59:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:15.003 02:59:05 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:15.003 02:59:05 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:15.003 02:59:05 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:15.003 02:59:05 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.004 02:59:05 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:18:15.004 02:59:05 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.004 02:59:05 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:18:15.004 02:59:05 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.004 02:59:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:18:15.004 02:59:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:15.004 02:59:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:15.004 02:59:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:15.004 02:59:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:15.004 02:59:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:15.004 02:59:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:15.004 02:59:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:15.004 02:59:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:15.004 02:59:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:18:15.004 02:59:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:15.004 02:59:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:15.004 02:59:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:15.004 02:59:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:15.004 02:59:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:15.263 02:59:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.263 02:59:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:15.263 02:59:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.263 02:59:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:15.263 02:59:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:15.263 02:59:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:18:15.263 02:59:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:17.168 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:17.168 
02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:17.168 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:17.168 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:17.168 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:17.168 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:17.169 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:17.169 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:17.169 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:17.169 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:17.169 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:17.169 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:17.169 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:17.169 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:17.169 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:17.169 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:17.169 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:17.169 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:17.169 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:17.169 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:17.169 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:17.169 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:17.169 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:18:17.169 00:18:17.169 --- 10.0.0.2 ping statistics --- 00:18:17.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.169 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:18:17.169 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:17.169 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:17.169 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:18:17.169 00:18:17.169 --- 10.0.0.1 ping statistics --- 00:18:17.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.169 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:18:17.169 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:17.169 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:18:17.169 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:17.169 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:17.169 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:17.169 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:17.169 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:17.169 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:17.169 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:17.169 02:59:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:17.169 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:17.169 02:59:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:17.169 02:59:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:17.169 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=349071 00:18:17.169 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:17.169 02:59:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 349071 00:18:17.169 02:59:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 349071 ']' 00:18:17.169 02:59:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.169 02:59:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:17.169 02:59:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:17.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:17.169 02:59:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:17.169 02:59:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:17.169 [2024-05-13 02:59:07.916514] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:18:17.169 [2024-05-13 02:59:07.916605] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:17.169 EAL: No free 2048 kB hugepages reported on node 1 00:18:17.169 [2024-05-13 02:59:07.955176] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:17.427 [2024-05-13 02:59:07.981860] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.427 [2024-05-13 02:59:08.066117] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:18:17.427 [2024-05-13 02:59:08.066168] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:17.427 [2024-05-13 02:59:08.066197] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:17.427 [2024-05-13 02:59:08.066214] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:17.427 [2024-05-13 02:59:08.066224] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:17.427 [2024-05-13 02:59:08.066249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:17.427 02:59:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:17.427 02:59:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:18:17.427 02:59:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:17.427 02:59:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:17.427 02:59:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:17.427 02:59:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:17.427 02:59:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:17.427 02:59:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:17.427 02:59:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.427 02:59:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:17.427 [2024-05-13 02:59:08.196806] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:17.427 02:59:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.427 02:59:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:17.427 02:59:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.427 02:59:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:17.427 02:59:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.427 02:59:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:17.427 02:59:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.427 02:59:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:17.427 [2024-05-13 02:59:08.212774] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:17.427 [2024-05-13 02:59:08.213028] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:17.427 02:59:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.427 02:59:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:17.427 02:59:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.427 02:59:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:17.427 02:59:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.427 02:59:08 nvmf_tcp.nvmf_zcopy -- 
target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:17.427 02:59:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.427 02:59:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:17.684 malloc0 00:18:17.684 02:59:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.684 02:59:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:17.684 02:59:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.684 02:59:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:17.684 02:59:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.684 02:59:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:18:17.684 02:59:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:18:17.684 02:59:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:17.684 02:59:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:17.684 02:59:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:17.684 02:59:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:17.684 { 00:18:17.684 "params": { 00:18:17.684 "name": "Nvme$subsystem", 00:18:17.684 "trtype": "$TEST_TRANSPORT", 00:18:17.684 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:17.684 "adrfam": "ipv4", 00:18:17.684 "trsvcid": "$NVMF_PORT", 00:18:17.684 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:17.684 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:17.684 "hdgst": ${hdgst:-false}, 00:18:17.684 "ddgst": ${ddgst:-false} 00:18:17.684 }, 00:18:17.684 "method": "bdev_nvme_attach_controller" 00:18:17.684 } 00:18:17.684 EOF 00:18:17.684 )") 00:18:17.684 02:59:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:17.684 02:59:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:18:17.684 02:59:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:17.684 02:59:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:17.684 "params": { 00:18:17.684 "name": "Nvme1", 00:18:17.684 "trtype": "tcp", 00:18:17.684 "traddr": "10.0.0.2", 00:18:17.684 "adrfam": "ipv4", 00:18:17.684 "trsvcid": "4420", 00:18:17.684 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:17.684 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:17.684 "hdgst": false, 00:18:17.684 "ddgst": false 00:18:17.684 }, 00:18:17.684 "method": "bdev_nvme_attach_controller" 00:18:17.684 }' 00:18:17.684 [2024-05-13 02:59:08.285236] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:18:17.684 [2024-05-13 02:59:08.285317] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid349096 ] 00:18:17.684 EAL: No free 2048 kB hugepages reported on node 1 00:18:17.684 [2024-05-13 02:59:08.316210] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
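The trace above shows gen_nvmf_target_json assembling a single bdev_nvme_attach_controller entry and handing it to bdevperf through a file descriptor (--json /dev/fd/62). As a rough, illustrative sketch of that pattern only: the "subsystems"/"bdev"/"config" wrapper below is not visible in the trace and is assumed to be the usual SPDK JSON-config layout, the /tmp path is hypothetical, and the attach parameters are copied from the fragment printed above.

# Hedged sketch, not the test's own helper: write the attach config to a file
# and point bdevperf at it, mirroring the zcopy.sh@33 invocation above.
cat > /tmp/zcopy_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same flags as the 10-second verify run traced above.
./build/examples/bdevperf --json /tmp/zcopy_bdev.json -t 10 -q 128 -w verify -o 8192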
00:18:17.684 [2024-05-13 02:59:08.347913] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.684 [2024-05-13 02:59:08.441346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.943 Running I/O for 10 seconds... 00:18:27.930 00:18:27.930 Latency(us) 00:18:27.930 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:27.930 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:18:27.930 Verification LBA range: start 0x0 length 0x1000 00:18:27.930 Nvme1n1 : 10.01 5809.45 45.39 0.00 0.00 21973.14 1808.31 42913.94 00:18:27.930 =================================================================================================================== 00:18:27.930 Total : 5809.45 45.39 0.00 0.00 21973.14 1808.31 42913.94 00:18:28.190 02:59:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=350400 00:18:28.190 02:59:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:18:28.190 02:59:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:28.190 02:59:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:18:28.190 02:59:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:18:28.190 02:59:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:28.190 02:59:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:28.190 02:59:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:28.190 02:59:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:28.190 { 00:18:28.190 "params": { 00:18:28.190 "name": "Nvme$subsystem", 00:18:28.191 "trtype": "$TEST_TRANSPORT", 00:18:28.191 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:28.191 "adrfam": "ipv4", 00:18:28.191 "trsvcid": "$NVMF_PORT", 00:18:28.191 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:28.191 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:28.191 "hdgst": ${hdgst:-false}, 00:18:28.191 "ddgst": ${ddgst:-false} 00:18:28.191 }, 00:18:28.191 "method": "bdev_nvme_attach_controller" 00:18:28.191 } 00:18:28.191 EOF 00:18:28.191 )") 00:18:28.191 02:59:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:28.191 [2024-05-13 02:59:18.892492] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.191 [2024-05-13 02:59:18.892535] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.191 02:59:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
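The verify-run summary above is internally consistent, which is a quick way to sanity-check any bdevperf table: throughput should equal IOPS times the 8 KiB I/O size, and with 128 I/Os kept in flight the average latency follows from Little's law (latency ~ queue depth / IOPS). A small awk check using only the numbers printed in the table; the ~0.3% gap on latency is expected since the queue is not completely full for the entire 10 s window.

awk 'BEGIN {
    iops = 5809.45; io_size = 8192; qdepth = 128     # values from the latency table above
    printf "throughput : %.2f MiB/s (table: 45.39)\n", iops * io_size / (1024 * 1024)
    printf "avg latency: %.0f us   (table: 21973.14)\n", qdepth / iops * 1e6
}'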
00:18:28.191 02:59:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:28.191 02:59:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:28.191 "params": { 00:18:28.191 "name": "Nvme1", 00:18:28.191 "trtype": "tcp", 00:18:28.191 "traddr": "10.0.0.2", 00:18:28.191 "adrfam": "ipv4", 00:18:28.191 "trsvcid": "4420", 00:18:28.191 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.191 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:28.191 "hdgst": false, 00:18:28.191 "ddgst": false 00:18:28.191 }, 00:18:28.191 "method": "bdev_nvme_attach_controller" 00:18:28.191 }' 00:18:28.191 [2024-05-13 02:59:18.900440] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.191 [2024-05-13 02:59:18.900467] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.191 [2024-05-13 02:59:18.908449] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.191 [2024-05-13 02:59:18.908471] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.191 [2024-05-13 02:59:18.916465] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.191 [2024-05-13 02:59:18.916486] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.191 [2024-05-13 02:59:18.924485] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.191 [2024-05-13 02:59:18.924505] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.191 [2024-05-13 02:59:18.927380] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:18:28.191 [2024-05-13 02:59:18.927449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid350400 ] 00:18:28.191 [2024-05-13 02:59:18.932505] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.191 [2024-05-13 02:59:18.932525] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.191 [2024-05-13 02:59:18.940528] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.191 [2024-05-13 02:59:18.940547] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.191 [2024-05-13 02:59:18.948549] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.191 [2024-05-13 02:59:18.948568] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.191 [2024-05-13 02:59:18.956570] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.191 [2024-05-13 02:59:18.956589] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.191 EAL: No free 2048 kB hugepages reported on node 1 00:18:28.191 [2024-05-13 02:59:18.962207] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
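From here on the target log is dominated by repeated pairs of "Requested NSID 1 already in use" / "Unable to add namespace". Taking the messages at face value, these are RPC attempts to add a namespace under NSID 1 while that NSID is still attached (it was claimed by the malloc0 add at zcopy.sh@30 above), so the target rejects each attempt; they evidently keep arriving while the second bdevperf run (-w randrw -M 50, -t 5) is set up and executes. A minimal, hedged reproduction against a running target, assuming the standard scripts/rpc.py client that rpc_cmd wraps and reusing the NQN, bdev name, and NSID from the trace:

# First add succeeds and claims NSID 1 (as zcopy.sh@30 did above); the repeat
# is rejected with the same "Requested NSID 1 already in use" message.
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1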
00:18:28.191 [2024-05-13 02:59:18.964613] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.191 [2024-05-13 02:59:18.964637] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.191 [2024-05-13 02:59:18.972632] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.191 [2024-05-13 02:59:18.972656] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.191 [2024-05-13 02:59:18.980656] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.191 [2024-05-13 02:59:18.980680] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.191 [2024-05-13 02:59:18.988676] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.191 [2024-05-13 02:59:18.988710] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.191 [2024-05-13 02:59:18.992668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.452 [2024-05-13 02:59:18.996715] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.452 [2024-05-13 02:59:18.996754] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.452 [2024-05-13 02:59:19.004773] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.452 [2024-05-13 02:59:19.004806] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.452 [2024-05-13 02:59:19.012768] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.452 [2024-05-13 02:59:19.012792] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.452 [2024-05-13 02:59:19.020779] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.452 [2024-05-13 02:59:19.020801] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.452 [2024-05-13 02:59:19.028798] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.452 [2024-05-13 02:59:19.028820] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.452 [2024-05-13 02:59:19.036810] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.452 [2024-05-13 02:59:19.036830] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.452 [2024-05-13 02:59:19.044846] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.452 [2024-05-13 02:59:19.044870] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.452 [2024-05-13 02:59:19.052887] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.452 [2024-05-13 02:59:19.052921] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.452 [2024-05-13 02:59:19.060881] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.452 [2024-05-13 02:59:19.060902] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.452 [2024-05-13 02:59:19.068902] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.452 [2024-05-13 02:59:19.068923] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.452 [2024-05-13 02:59:19.076921] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.452 [2024-05-13 02:59:19.076941] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.452 [2024-05-13 02:59:19.084962] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.452 [2024-05-13 02:59:19.084997] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.452 [2024-05-13 02:59:19.088890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.452 [2024-05-13 02:59:19.092964] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.452 [2024-05-13 02:59:19.092997] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.452 [2024-05-13 02:59:19.101003] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.452 [2024-05-13 02:59:19.101028] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.452 [2024-05-13 02:59:19.109068] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.452 [2024-05-13 02:59:19.109103] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.452 [2024-05-13 02:59:19.117077] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.452 [2024-05-13 02:59:19.117114] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.452 [2024-05-13 02:59:19.125117] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.452 [2024-05-13 02:59:19.125153] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.452 [2024-05-13 02:59:19.133146] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.452 [2024-05-13 02:59:19.133183] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.452 [2024-05-13 02:59:19.141150] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.452 [2024-05-13 02:59:19.141188] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.452 [2024-05-13 02:59:19.149173] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.452 [2024-05-13 02:59:19.149208] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.452 [2024-05-13 02:59:19.157183] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.452 [2024-05-13 02:59:19.157226] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.452 [2024-05-13 02:59:19.165193] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.452 [2024-05-13 02:59:19.165219] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.452 [2024-05-13 02:59:19.173236] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.452 [2024-05-13 02:59:19.173269] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.452 [2024-05-13 02:59:19.181258] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.452 [2024-05-13 02:59:19.181293] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.452 [2024-05-13 02:59:19.189276] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.452 [2024-05-13 02:59:19.189301] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.452 [2024-05-13 02:59:19.197276] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.452 [2024-05-13 02:59:19.197300] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.452 [2024-05-13 02:59:19.205298] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.452 [2024-05-13 02:59:19.205322] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.452 [2024-05-13 02:59:19.213332] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.452 [2024-05-13 02:59:19.213362] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.452 [2024-05-13 02:59:19.221354] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.452 [2024-05-13 02:59:19.221381] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.452 [2024-05-13 02:59:19.229374] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.452 [2024-05-13 02:59:19.229402] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.452 [2024-05-13 02:59:19.237393] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.452 [2024-05-13 02:59:19.237420] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.452 [2024-05-13 02:59:19.245417] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.452 [2024-05-13 02:59:19.245442] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.452 [2024-05-13 02:59:19.253443] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.452 [2024-05-13 02:59:19.253468] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.713 [2024-05-13 02:59:19.261464] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.713 [2024-05-13 02:59:19.261489] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.713 [2024-05-13 02:59:19.269485] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.713 [2024-05-13 02:59:19.269508] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.713 [2024-05-13 02:59:19.277513] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.713 [2024-05-13 02:59:19.277540] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.713 [2024-05-13 02:59:19.285537] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.713 [2024-05-13 02:59:19.285563] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.713 [2024-05-13 02:59:19.293574] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.713 [2024-05-13 02:59:19.293602] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.713 [2024-05-13 02:59:19.301581] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.713 [2024-05-13 02:59:19.301606] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.713 [2024-05-13 02:59:19.309754] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.713 [2024-05-13 02:59:19.309789] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.713 [2024-05-13 02:59:19.317629] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.713 [2024-05-13 02:59:19.317656] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.713 Running I/O for 5 seconds... 00:18:28.713 [2024-05-13 02:59:19.325650] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.713 [2024-05-13 02:59:19.325676] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.713 [2024-05-13 02:59:19.341393] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.713 [2024-05-13 02:59:19.341438] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.713 [2024-05-13 02:59:19.353511] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.713 [2024-05-13 02:59:19.353554] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.713 [2024-05-13 02:59:19.366575] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.713 [2024-05-13 02:59:19.366606] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.713 [2024-05-13 02:59:19.379335] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.713 [2024-05-13 02:59:19.379367] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.713 [2024-05-13 02:59:19.391083] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.713 [2024-05-13 02:59:19.391125] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.713 [2024-05-13 02:59:19.404136] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.713 [2024-05-13 02:59:19.404179] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.713 [2024-05-13 02:59:19.416525] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.713 [2024-05-13 02:59:19.416556] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.713 [2024-05-13 02:59:19.428648] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.713 [2024-05-13 02:59:19.428679] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.713 [2024-05-13 02:59:19.441660] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.713 [2024-05-13 02:59:19.441688] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.713 [2024-05-13 02:59:19.453040] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.713 [2024-05-13 02:59:19.453069] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.713 [2024-05-13 02:59:19.464674] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.714 [2024-05-13 02:59:19.464711] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.714 [2024-05-13 02:59:19.475965] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.714 [2024-05-13 02:59:19.476003] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.714 [2024-05-13 02:59:19.487557] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.714 [2024-05-13 02:59:19.487584] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.714 [2024-05-13 02:59:19.498468] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.714 [2024-05-13 02:59:19.498497] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.714 [2024-05-13 02:59:19.509968] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.714 [2024-05-13 02:59:19.510010] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.974 [2024-05-13 02:59:19.521109] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.974 [2024-05-13 02:59:19.521138] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.974 [2024-05-13 02:59:19.532893] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.974 [2024-05-13 02:59:19.532921] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.974 [2024-05-13 02:59:19.543801] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.974 [2024-05-13 02:59:19.543829] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.974 [2024-05-13 02:59:19.555658] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.974 [2024-05-13 02:59:19.555707] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.974 [2024-05-13 02:59:19.567646] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.974 [2024-05-13 02:59:19.567673] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.974 [2024-05-13 02:59:19.579104] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.974 [2024-05-13 02:59:19.579131] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.974 [2024-05-13 02:59:19.591241] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.974 [2024-05-13 02:59:19.591277] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.974 [2024-05-13 02:59:19.601815] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.974 [2024-05-13 02:59:19.601843] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.974 [2024-05-13 02:59:19.612929] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.975 [2024-05-13 02:59:19.612956] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.975 [2024-05-13 02:59:19.624438] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.975 [2024-05-13 02:59:19.624476] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.975 [2024-05-13 02:59:19.636017] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.975 [2024-05-13 02:59:19.636045] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.975 [2024-05-13 02:59:19.647804] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.975 [2024-05-13 02:59:19.647832] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.975 [2024-05-13 02:59:19.658561] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.975 [2024-05-13 02:59:19.658598] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.975 [2024-05-13 02:59:19.671437] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.975 [2024-05-13 02:59:19.671464] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.975 [2024-05-13 02:59:19.682961] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.975 [2024-05-13 02:59:19.683003] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.975 [2024-05-13 02:59:19.695572] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.975 [2024-05-13 02:59:19.695599] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.975 [2024-05-13 02:59:19.706725] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.975 [2024-05-13 02:59:19.706771] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.975 [2024-05-13 02:59:19.717652] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.975 [2024-05-13 02:59:19.717712] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.975 [2024-05-13 02:59:19.728956] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.975 [2024-05-13 02:59:19.728998] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.975 [2024-05-13 02:59:19.740465] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.975 [2024-05-13 02:59:19.740501] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.975 [2024-05-13 02:59:19.752358] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.975 [2024-05-13 02:59:19.752385] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.975 [2024-05-13 02:59:19.763537] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.975 [2024-05-13 02:59:19.763564] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.975 [2024-05-13 02:59:19.774906] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.975 [2024-05-13 02:59:19.774933] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.236 [2024-05-13 02:59:19.787857] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.236 [2024-05-13 02:59:19.787885] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.236 [2024-05-13 02:59:19.800069] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.236 [2024-05-13 02:59:19.800096] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.236 [2024-05-13 02:59:19.810216] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.236 [2024-05-13 02:59:19.810253] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.236 [2024-05-13 02:59:19.822586] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.236 [2024-05-13 02:59:19.822614] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.236 [2024-05-13 02:59:19.833397] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.236 [2024-05-13 02:59:19.833435] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.236 [2024-05-13 02:59:19.844195] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.236 [2024-05-13 02:59:19.844233] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.236 [2024-05-13 02:59:19.854780] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.236 [2024-05-13 02:59:19.854809] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.236 [2024-05-13 02:59:19.869076] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.236 [2024-05-13 02:59:19.869103] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.236 [2024-05-13 02:59:19.880227] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.236 [2024-05-13 02:59:19.880255] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.236 [2024-05-13 02:59:19.891169] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.236 [2024-05-13 02:59:19.891196] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.236 [2024-05-13 02:59:19.903401] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.236 [2024-05-13 02:59:19.903429] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.236 [2024-05-13 02:59:19.916025] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.236 [2024-05-13 02:59:19.916062] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.236 [2024-05-13 02:59:19.927857] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.236 [2024-05-13 02:59:19.927885] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.236 [2024-05-13 02:59:19.939979] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.236 [2024-05-13 02:59:19.940022] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.236 [2024-05-13 02:59:19.951010] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.236 [2024-05-13 02:59:19.951037] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.236 [2024-05-13 02:59:19.963161] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.236 [2024-05-13 02:59:19.963188] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.236 [2024-05-13 02:59:19.973794] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.236 [2024-05-13 02:59:19.973822] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.236 [2024-05-13 02:59:19.985661] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.236 [2024-05-13 02:59:19.985709] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.236 [2024-05-13 02:59:19.996037] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.236 [2024-05-13 02:59:19.996064] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.236 [2024-05-13 02:59:20.009203] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.236 [2024-05-13 02:59:20.009238] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.236 [2024-05-13 02:59:20.020604] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.236 [2024-05-13 02:59:20.020633] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.236 [2024-05-13 02:59:20.032079] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.236 [2024-05-13 02:59:20.032113] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.497 [2024-05-13 02:59:20.044619] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.497 [2024-05-13 02:59:20.044671] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.497 [2024-05-13 02:59:20.056101] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.497 [2024-05-13 02:59:20.056127] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.497 [2024-05-13 02:59:20.068168] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.497 [2024-05-13 02:59:20.068194] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.497 [2024-05-13 02:59:20.079454] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.497 [2024-05-13 02:59:20.079481] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.497 [2024-05-13 02:59:20.091079] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.497 [2024-05-13 02:59:20.091114] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.497 [2024-05-13 02:59:20.102939] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.497 [2024-05-13 02:59:20.102966] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.497 [2024-05-13 02:59:20.114126] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.497 [2024-05-13 02:59:20.114162] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.497 [2024-05-13 02:59:20.125854] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.497 [2024-05-13 02:59:20.125881] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.497 [2024-05-13 02:59:20.137033] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.497 [2024-05-13 02:59:20.137084] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.497 [2024-05-13 02:59:20.149597] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.497 [2024-05-13 02:59:20.149632] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.497 [2024-05-13 02:59:20.160849] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.497 [2024-05-13 02:59:20.160876] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.497 [2024-05-13 02:59:20.172291] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.497 [2024-05-13 02:59:20.172328] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.497 [2024-05-13 02:59:20.184624] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.497 [2024-05-13 02:59:20.184658] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.497 [2024-05-13 02:59:20.197202] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.497 [2024-05-13 02:59:20.197252] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.497 [2024-05-13 02:59:20.209029] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.497 [2024-05-13 02:59:20.209056] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.497 [2024-05-13 02:59:20.220787] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.497 [2024-05-13 02:59:20.220815] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.497 [2024-05-13 02:59:20.232378] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.497 [2024-05-13 02:59:20.232404] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.497 [2024-05-13 02:59:20.243212] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.497 [2024-05-13 02:59:20.243251] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.497 [2024-05-13 02:59:20.254895] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.497 [2024-05-13 02:59:20.254923] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.497 [2024-05-13 02:59:20.266994] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.497 [2024-05-13 02:59:20.267021] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.497 [2024-05-13 02:59:20.278331] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.497 [2024-05-13 02:59:20.278357] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.497 [2024-05-13 02:59:20.290049] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.497 [2024-05-13 02:59:20.290076] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.758 [2024-05-13 02:59:20.301954] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.758 [2024-05-13 02:59:20.302001] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.758 [2024-05-13 02:59:20.313208] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.758 [2024-05-13 02:59:20.313236] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.758 [2024-05-13 02:59:20.324592] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.758 [2024-05-13 02:59:20.324627] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.758 [2024-05-13 02:59:20.337069] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.758 [2024-05-13 02:59:20.337096] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.758 [2024-05-13 02:59:20.347882] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.758 [2024-05-13 02:59:20.347909] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.758 [2024-05-13 02:59:20.359727] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.758 [2024-05-13 02:59:20.359770] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.758 [2024-05-13 02:59:20.370812] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.758 [2024-05-13 02:59:20.370850] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.758 [2024-05-13 02:59:20.382727] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.758 [2024-05-13 02:59:20.382769] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.758 [2024-05-13 02:59:20.393577] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.758 [2024-05-13 02:59:20.393603] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.758 [2024-05-13 02:59:20.405433] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.758 [2024-05-13 02:59:20.405472] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.758 [2024-05-13 02:59:20.417087] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.758 [2024-05-13 02:59:20.417113] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.758 [2024-05-13 02:59:20.428953] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.758 [2024-05-13 02:59:20.428993] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.758 [2024-05-13 02:59:20.440421] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.758 [2024-05-13 02:59:20.440447] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.758 [2024-05-13 02:59:20.453079] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.758 [2024-05-13 02:59:20.453105] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.758 [2024-05-13 02:59:20.464401] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.758 [2024-05-13 02:59:20.464428] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.758 [2024-05-13 02:59:20.477055] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.758 [2024-05-13 02:59:20.477082] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.758 [2024-05-13 02:59:20.487967] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.758 [2024-05-13 02:59:20.488019] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.758 [2024-05-13 02:59:20.500774] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.758 [2024-05-13 02:59:20.500802] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.758 [2024-05-13 02:59:20.511850] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.758 [2024-05-13 02:59:20.511877] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.758 [2024-05-13 02:59:20.523690] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.758 [2024-05-13 02:59:20.523726] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.758 [2024-05-13 02:59:20.535045] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.758 [2024-05-13 02:59:20.535081] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.758 [2024-05-13 02:59:20.546883] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.758 [2024-05-13 02:59:20.546920] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.758 [2024-05-13 02:59:20.558211] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.758 [2024-05-13 02:59:20.558262] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.017 [2024-05-13 02:59:20.570852] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.017 [2024-05-13 02:59:20.570881] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.017 [2024-05-13 02:59:20.581495] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.017 [2024-05-13 02:59:20.581531] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.017 [2024-05-13 02:59:20.593446] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.017 [2024-05-13 02:59:20.593483] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.017 [2024-05-13 02:59:20.604917] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.017 [2024-05-13 02:59:20.604945] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.017 [2024-05-13 02:59:20.616797] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.018 [2024-05-13 02:59:20.616825] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.018 [2024-05-13 02:59:20.628626] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.018 [2024-05-13 02:59:20.628676] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.018 [2024-05-13 02:59:20.641123] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.018 [2024-05-13 02:59:20.641150] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.018 [2024-05-13 02:59:20.652444] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.018 [2024-05-13 02:59:20.652475] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.018 [2024-05-13 02:59:20.663349] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.018 [2024-05-13 02:59:20.663384] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.018 [2024-05-13 02:59:20.674949] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.018 [2024-05-13 02:59:20.674977] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.018 [2024-05-13 02:59:20.686221] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.018 [2024-05-13 02:59:20.686247] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.018 [2024-05-13 02:59:20.698493] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.018 [2024-05-13 02:59:20.698518] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.018 [2024-05-13 02:59:20.708718] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.018 [2024-05-13 02:59:20.708746] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.018 [2024-05-13 02:59:20.720720] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.018 [2024-05-13 02:59:20.720747] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.018 [2024-05-13 02:59:20.731353] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.018 [2024-05-13 02:59:20.731378] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.018 [2024-05-13 02:59:20.743441] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.018 [2024-05-13 02:59:20.743476] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.018 [2024-05-13 02:59:20.754108] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.018 [2024-05-13 02:59:20.754135] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.018 [2024-05-13 02:59:20.766497] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.018 [2024-05-13 02:59:20.766523] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.018 [2024-05-13 02:59:20.779889] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.018 [2024-05-13 02:59:20.779926] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.018 [2024-05-13 02:59:20.791908] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.018 [2024-05-13 02:59:20.791936] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.018 [2024-05-13 02:59:20.803148] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.018 [2024-05-13 02:59:20.803174] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.018 [2024-05-13 02:59:20.814735] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.018 [2024-05-13 02:59:20.814767] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.277 [2024-05-13 02:59:20.826351] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.277 [2024-05-13 02:59:20.826388] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.277 [2024-05-13 02:59:20.838024] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.277 [2024-05-13 02:59:20.838067] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.277 [2024-05-13 02:59:20.849549] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.277 [2024-05-13 02:59:20.849582] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.277 [2024-05-13 02:59:20.861341] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.277 [2024-05-13 02:59:20.861391] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.277 [2024-05-13 02:59:20.873241] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.277 [2024-05-13 02:59:20.873267] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.277 [2024-05-13 02:59:20.884996] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.277 [2024-05-13 02:59:20.885024] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.277 [2024-05-13 02:59:20.896776] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.277 [2024-05-13 02:59:20.896804] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.277 [2024-05-13 02:59:20.908165] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.277 [2024-05-13 02:59:20.908191] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.277 [2024-05-13 02:59:20.919952] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.277 [2024-05-13 02:59:20.919978] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.277 [2024-05-13 02:59:20.931290] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.277 [2024-05-13 02:59:20.931316] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.277 [2024-05-13 02:59:20.943010] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.277 [2024-05-13 02:59:20.943061] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.277 [2024-05-13 02:59:20.954067] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.277 [2024-05-13 02:59:20.954103] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.278 [2024-05-13 02:59:20.966285] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.278 [2024-05-13 02:59:20.966312] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.278 [2024-05-13 02:59:20.977232] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.278 [2024-05-13 02:59:20.977258] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.278 [2024-05-13 02:59:20.988238] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.278 [2024-05-13 02:59:20.988264] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.278 [2024-05-13 02:59:21.000095] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.278 [2024-05-13 02:59:21.000122] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.278 [2024-05-13 02:59:21.012425] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.278 [2024-05-13 02:59:21.012452] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.278 [2024-05-13 02:59:21.023941] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.278 [2024-05-13 02:59:21.023995] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.278 [2024-05-13 02:59:21.035995] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.278 [2024-05-13 02:59:21.036022] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.278 [2024-05-13 02:59:21.047502] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.278 [2024-05-13 02:59:21.047528] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.278 [2024-05-13 02:59:21.059762] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.278 [2024-05-13 02:59:21.059801] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.278 [2024-05-13 02:59:21.072373] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.278 [2024-05-13 02:59:21.072399] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.536 [2024-05-13 02:59:21.085168] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.536 [2024-05-13 02:59:21.085195] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.536 [2024-05-13 02:59:21.095884] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.536 [2024-05-13 02:59:21.095912] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.536 [2024-05-13 02:59:21.109394] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.536 [2024-05-13 02:59:21.109446] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.536 [2024-05-13 02:59:21.119869] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.536 [2024-05-13 02:59:21.119897] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.536 [2024-05-13 02:59:21.131826] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.536 [2024-05-13 02:59:21.131853] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.536 [2024-05-13 02:59:21.143192] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.536 [2024-05-13 02:59:21.143241] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.536 [2024-05-13 02:59:21.154515] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.536 [2024-05-13 02:59:21.154549] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.536 [2024-05-13 02:59:21.165823] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.536 [2024-05-13 02:59:21.165861] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.536 [2024-05-13 02:59:21.177307] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.536 [2024-05-13 02:59:21.177333] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.536 [2024-05-13 02:59:21.187953] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.536 [2024-05-13 02:59:21.187996] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.536 [2024-05-13 02:59:21.200350] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.536 [2024-05-13 02:59:21.200377] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.536 [2024-05-13 02:59:21.211323] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.536 [2024-05-13 02:59:21.211348] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.536 [2024-05-13 02:59:21.223092] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.536 [2024-05-13 02:59:21.223128] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.536 [2024-05-13 02:59:21.233665] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.536 [2024-05-13 02:59:21.233715] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.536 [2024-05-13 02:59:21.245922] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.536 [2024-05-13 02:59:21.245960] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.536 [2024-05-13 02:59:21.256936] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.536 [2024-05-13 02:59:21.256964] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.536 [2024-05-13 02:59:21.268554] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.536 [2024-05-13 02:59:21.268580] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.536 [2024-05-13 02:59:21.279387] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.536 [2024-05-13 02:59:21.279421] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.536 [2024-05-13 02:59:21.290481] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.536 [2024-05-13 02:59:21.290516] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.536 [2024-05-13 02:59:21.301515] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.536 [2024-05-13 02:59:21.301540] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.536 [2024-05-13 02:59:21.313336] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.536 [2024-05-13 02:59:21.313361] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.536 [2024-05-13 02:59:21.324718] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.536 [2024-05-13 02:59:21.324746] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.536 [2024-05-13 02:59:21.336763] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.536 [2024-05-13 02:59:21.336791] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.796 [2024-05-13 02:59:21.350684] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.796 [2024-05-13 02:59:21.350731] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.796 [2024-05-13 02:59:21.362384] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.796 [2024-05-13 02:59:21.362410] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.796 [2024-05-13 02:59:21.373828] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.796 [2024-05-13 02:59:21.373856] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.796 [2024-05-13 02:59:21.384866] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.796 [2024-05-13 02:59:21.384918] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.796 [2024-05-13 02:59:21.397656] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.796 [2024-05-13 02:59:21.397714] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.796 [2024-05-13 02:59:21.410797] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.796 [2024-05-13 02:59:21.410850] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.796 [2024-05-13 02:59:21.424118] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.796 [2024-05-13 02:59:21.424144] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.796 [2024-05-13 02:59:21.434589] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.796 [2024-05-13 02:59:21.434615] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.796 [2024-05-13 02:59:21.445509] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.796 [2024-05-13 02:59:21.445535] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.796 [2024-05-13 02:59:21.456161] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.796 [2024-05-13 02:59:21.456189] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.796 [2024-05-13 02:59:21.467397] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.796 [2024-05-13 02:59:21.467423] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.796 [2024-05-13 02:59:21.480199] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.796 [2024-05-13 02:59:21.480227] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.796 [2024-05-13 02:59:21.491341] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.796 [2024-05-13 02:59:21.491380] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.796 [2024-05-13 02:59:21.502464] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.796 [2024-05-13 02:59:21.502491] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.796 [2024-05-13 02:59:21.514479] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.796 [2024-05-13 02:59:21.514515] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.796 [2024-05-13 02:59:21.527141] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.796 [2024-05-13 02:59:21.527181] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.796 [2024-05-13 02:59:21.539244] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.796 [2024-05-13 02:59:21.539270] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.796 [2024-05-13 02:59:21.551753] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.796 [2024-05-13 02:59:21.551796] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.796 [2024-05-13 02:59:21.563229] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.796 [2024-05-13 02:59:21.563259] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.796 [2024-05-13 02:59:21.573822] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.796 [2024-05-13 02:59:21.573873] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.796 [2024-05-13 02:59:21.585516] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.796 [2024-05-13 02:59:21.585542] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.797 [2024-05-13 02:59:21.596729] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.797 [2024-05-13 02:59:21.596757] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.056 [2024-05-13 02:59:21.610596] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.056 [2024-05-13 02:59:21.610623] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.056 [2024-05-13 02:59:21.623555] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.056 [2024-05-13 02:59:21.623594] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.056 [2024-05-13 02:59:21.636060] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.056 [2024-05-13 02:59:21.636086] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.056 [2024-05-13 02:59:21.647657] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.056 [2024-05-13 02:59:21.647716] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.056 [2024-05-13 02:59:21.661448] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.056 [2024-05-13 02:59:21.661473] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.057 [2024-05-13 02:59:21.673311] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.057 [2024-05-13 02:59:21.673337] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.057 [2024-05-13 02:59:21.685061] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.057 [2024-05-13 02:59:21.685087] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.057 [2024-05-13 02:59:21.696430] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.057 [2024-05-13 02:59:21.696456] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.057 [2024-05-13 02:59:21.708355] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.057 [2024-05-13 02:59:21.708390] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.057 [2024-05-13 02:59:21.721030] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.057 [2024-05-13 02:59:21.721071] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.057 [2024-05-13 02:59:21.732900] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.057 [2024-05-13 02:59:21.732928] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.057 [2024-05-13 02:59:21.745326] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.057 [2024-05-13 02:59:21.745352] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.057 [2024-05-13 02:59:21.756824] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.057 [2024-05-13 02:59:21.756852] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.057 [2024-05-13 02:59:21.768239] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.057 [2024-05-13 02:59:21.768264] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.057 [2024-05-13 02:59:21.779073] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.057 [2024-05-13 02:59:21.779099] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.057 [2024-05-13 02:59:21.790576] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.057 [2024-05-13 02:59:21.790612] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.057 [2024-05-13 02:59:21.801874] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.057 [2024-05-13 02:59:21.801902] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.057 [2024-05-13 02:59:21.813826] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.057 [2024-05-13 02:59:21.813854] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.057 [2024-05-13 02:59:21.825118] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.057 [2024-05-13 02:59:21.825144] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.057 [2024-05-13 02:59:21.836382] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.057 [2024-05-13 02:59:21.836418] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.057 [2024-05-13 02:59:21.848081] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.057 [2024-05-13 02:59:21.848123] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.316 [2024-05-13 02:59:21.860643] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.316 [2024-05-13 02:59:21.860705] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.316 [2024-05-13 02:59:21.873083] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.316 [2024-05-13 02:59:21.873108] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.316 [2024-05-13 02:59:21.884946] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.316 [2024-05-13 02:59:21.884989] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.316 [2024-05-13 02:59:21.896211] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.316 [2024-05-13 02:59:21.896238] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.316 [2024-05-13 02:59:21.908222] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.316 [2024-05-13 02:59:21.908248] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.316 [2024-05-13 02:59:21.920989] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.316 [2024-05-13 02:59:21.921016] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.316 [2024-05-13 02:59:21.932889] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.316 [2024-05-13 02:59:21.932932] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.316 [2024-05-13 02:59:21.945559] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.316 [2024-05-13 02:59:21.945585] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.316 [2024-05-13 02:59:21.957492] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.316 [2024-05-13 02:59:21.957524] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.316 [2024-05-13 02:59:21.971165] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.316 [2024-05-13 02:59:21.971192] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.316 [2024-05-13 02:59:21.983666] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.316 [2024-05-13 02:59:21.983714] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.316 [2024-05-13 02:59:21.994737] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.316 [2024-05-13 02:59:21.994780] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.316 [2024-05-13 02:59:22.007066] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.316 [2024-05-13 02:59:22.007092] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.316 [2024-05-13 02:59:22.018867] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.316 [2024-05-13 02:59:22.018895] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.316 [2024-05-13 02:59:22.030954] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.316 [2024-05-13 02:59:22.030996] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.316 [2024-05-13 02:59:22.043485] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.316 [2024-05-13 02:59:22.043512] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.316 [2024-05-13 02:59:22.054585] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.316 [2024-05-13 02:59:22.054611] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.316 [2024-05-13 02:59:22.067554] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.316 [2024-05-13 02:59:22.067579] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.316 [2024-05-13 02:59:22.078512] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.316 [2024-05-13 02:59:22.078539] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.316 [2024-05-13 02:59:22.090651] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.316 [2024-05-13 02:59:22.090694] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.316 [2024-05-13 02:59:22.102216] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.316 [2024-05-13 02:59:22.102243] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.316 [2024-05-13 02:59:22.113566] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.316 [2024-05-13 02:59:22.113594] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.577 [2024-05-13 02:59:22.124854] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.577 [2024-05-13 02:59:22.124883] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.577 [2024-05-13 02:59:22.136600] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.577 [2024-05-13 02:59:22.136627] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.577 [2024-05-13 02:59:22.147886] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.577 [2024-05-13 02:59:22.147913] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.577 [2024-05-13 02:59:22.158931] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.577 [2024-05-13 02:59:22.158959] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.577 [2024-05-13 02:59:22.171482] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.577 [2024-05-13 02:59:22.171528] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.577 [2024-05-13 02:59:22.182913] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.577 [2024-05-13 02:59:22.182949] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.577 [2024-05-13 02:59:22.196991] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.577 [2024-05-13 02:59:22.197018] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.577 [2024-05-13 02:59:22.208237] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.577 [2024-05-13 02:59:22.208264] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.577 [2024-05-13 02:59:22.221318] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.577 [2024-05-13 02:59:22.221354] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.577 [2024-05-13 02:59:22.233440] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.577 [2024-05-13 02:59:22.233466] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.577 [2024-05-13 02:59:22.243835] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.577 [2024-05-13 02:59:22.243873] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.577 [2024-05-13 02:59:22.257121] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.577 [2024-05-13 02:59:22.257147] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.577 [2024-05-13 02:59:22.268337] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.577 [2024-05-13 02:59:22.268388] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.577 [2024-05-13 02:59:22.279894] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.577 [2024-05-13 02:59:22.279936] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.577 [2024-05-13 02:59:22.292650] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.577 [2024-05-13 02:59:22.292676] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.577 [2024-05-13 02:59:22.305542] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.577 [2024-05-13 02:59:22.305568] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.577 [2024-05-13 02:59:22.317829] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.577 [2024-05-13 02:59:22.317857] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.577 [2024-05-13 02:59:22.331170] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.577 [2024-05-13 02:59:22.331210] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.577 [2024-05-13 02:59:22.342461] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.577 [2024-05-13 02:59:22.342487] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.577 [2024-05-13 02:59:22.355032] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.577 [2024-05-13 02:59:22.355074] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.577 [2024-05-13 02:59:22.367251] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.578 [2024-05-13 02:59:22.367303] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.578 [2024-05-13 02:59:22.378226] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.578 [2024-05-13 02:59:22.378254] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.838 [2024-05-13 02:59:22.390867] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.838 [2024-05-13 02:59:22.390895] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.838 [2024-05-13 02:59:22.402582] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.838 [2024-05-13 02:59:22.402608] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.838 [2024-05-13 02:59:22.415633] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.838 [2024-05-13 02:59:22.415666] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.838 [2024-05-13 02:59:22.427584] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.838 [2024-05-13 02:59:22.427609] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.838 [2024-05-13 02:59:22.441169] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.838 [2024-05-13 02:59:22.441205] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.838 [2024-05-13 02:59:22.453664] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.838 [2024-05-13 02:59:22.453725] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.838 [2024-05-13 02:59:22.465257] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.838 [2024-05-13 02:59:22.465284] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.838 [2024-05-13 02:59:22.477438] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.838 [2024-05-13 02:59:22.477465] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.838 [2024-05-13 02:59:22.488930] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.838 [2024-05-13 02:59:22.488966] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.838 [2024-05-13 02:59:22.501072] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.838 [2024-05-13 02:59:22.501098] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.838 [2024-05-13 02:59:22.516082] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.838 [2024-05-13 02:59:22.516108] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.838 [2024-05-13 02:59:22.526789] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.838 [2024-05-13 02:59:22.526816] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.838 [2024-05-13 02:59:22.539763] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.838 [2024-05-13 02:59:22.539805] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.838 [2024-05-13 02:59:22.552717] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.838 [2024-05-13 02:59:22.552745] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.838 [2024-05-13 02:59:22.565759] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.838 [2024-05-13 02:59:22.565787] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.838 [2024-05-13 02:59:22.577016] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.838 [2024-05-13 02:59:22.577043] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.838 [2024-05-13 02:59:22.590148] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.838 [2024-05-13 02:59:22.590174] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.838 [2024-05-13 02:59:22.602294] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.838 [2024-05-13 02:59:22.602321] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.838 [2024-05-13 02:59:22.613349] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.838 [2024-05-13 02:59:22.613376] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.838 [2024-05-13 02:59:22.623964] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.838 [2024-05-13 02:59:22.624005] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.838 [2024-05-13 02:59:22.638079] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.838 [2024-05-13 02:59:22.638105] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.097 [2024-05-13 02:59:22.650602] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.097 [2024-05-13 02:59:22.650648] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.097 [2024-05-13 02:59:22.663269] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.097 [2024-05-13 02:59:22.663295] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.097 [2024-05-13 02:59:22.676231] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.097 [2024-05-13 02:59:22.676271] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.097 [2024-05-13 02:59:22.687279] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.097 [2024-05-13 02:59:22.687306] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.097 [2024-05-13 02:59:22.698927] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.097 [2024-05-13 02:59:22.698956] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.097 [2024-05-13 02:59:22.710009] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.097 [2024-05-13 02:59:22.710035] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.097 [2024-05-13 02:59:22.722532] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.097 [2024-05-13 02:59:22.722559] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.097 [2024-05-13 02:59:22.733745] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.097 [2024-05-13 02:59:22.733773] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.097 [2024-05-13 02:59:22.745707] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.097 [2024-05-13 02:59:22.745735] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.097 [2024-05-13 02:59:22.757647] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.097 [2024-05-13 02:59:22.757674] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.097 [2024-05-13 02:59:22.769036] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.097 [2024-05-13 02:59:22.769074] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.098 [2024-05-13 02:59:22.780862] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.098 [2024-05-13 02:59:22.780889] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.098 [2024-05-13 02:59:22.792053] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.098 [2024-05-13 02:59:22.792080] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.098 [2024-05-13 02:59:22.804206] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.098 [2024-05-13 02:59:22.804243] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.098 [2024-05-13 02:59:22.815295] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.098 [2024-05-13 02:59:22.815322] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.098 [2024-05-13 02:59:22.827533] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.098 [2024-05-13 02:59:22.827561] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.098 [2024-05-13 02:59:22.838117] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.098 [2024-05-13 02:59:22.838153] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.098 [2024-05-13 02:59:22.850299] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.098 [2024-05-13 02:59:22.850327] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.098 [2024-05-13 02:59:22.861553] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.098 [2024-05-13 02:59:22.861581] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.098 [2024-05-13 02:59:22.873149] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.098 [2024-05-13 02:59:22.873202] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.098 [2024-05-13 02:59:22.883948] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.098 [2024-05-13 02:59:22.883975] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.098 [2024-05-13 02:59:22.895719] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.098 [2024-05-13 02:59:22.895746] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.358 [2024-05-13 02:59:22.907382] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.358 [2024-05-13 02:59:22.907420] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.358 [2024-05-13 02:59:22.919274] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.358 [2024-05-13 02:59:22.919310] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.358 [2024-05-13 02:59:22.930903] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.358 [2024-05-13 02:59:22.930931] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.358 [2024-05-13 02:59:22.942506] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.358 [2024-05-13 02:59:22.942532] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.358 [2024-05-13 02:59:22.954439] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.358 [2024-05-13 02:59:22.954465] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.358 [2024-05-13 02:59:22.966101] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.358 [2024-05-13 02:59:22.966127] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.358 [2024-05-13 02:59:22.977748] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.358 [2024-05-13 02:59:22.977776] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.358 [2024-05-13 02:59:22.989867] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.358 [2024-05-13 02:59:22.989895] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.358 [2024-05-13 02:59:23.001379] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.358 [2024-05-13 02:59:23.001406] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.358 [2024-05-13 02:59:23.015528] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.358 [2024-05-13 02:59:23.015555] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.358 [2024-05-13 02:59:23.027077] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.358 [2024-05-13 02:59:23.027104] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.358 [2024-05-13 02:59:23.037899] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.358 [2024-05-13 02:59:23.037926] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.358 [2024-05-13 02:59:23.049901] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.358 [2024-05-13 02:59:23.049929] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.358 [2024-05-13 02:59:23.062722] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.358 [2024-05-13 02:59:23.062751] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.358 [2024-05-13 02:59:23.075663] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.358 [2024-05-13 02:59:23.075724] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.358 [2024-05-13 02:59:23.089691] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.358 [2024-05-13 02:59:23.089728] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.358 [2024-05-13 02:59:23.100591] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.358 [2024-05-13 02:59:23.100634] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.358 [2024-05-13 02:59:23.112456] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.358 [2024-05-13 02:59:23.112483] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.358 [2024-05-13 02:59:23.126237] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.358 [2024-05-13 02:59:23.126279] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.358 [2024-05-13 02:59:23.137456] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.358 [2024-05-13 02:59:23.137508] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.358 [2024-05-13 02:59:23.149005] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.358 [2024-05-13 02:59:23.149041] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.619 [2024-05-13 02:59:23.160820] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.619 [2024-05-13 02:59:23.160848] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.619 [2024-05-13 02:59:23.172739] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.619 [2024-05-13 02:59:23.172767] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.619 [2024-05-13 02:59:23.183592] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.619 [2024-05-13 02:59:23.183619] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.619 [2024-05-13 02:59:23.195936] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.619 [2024-05-13 02:59:23.195963] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.619 [2024-05-13 02:59:23.207763] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.619 [2024-05-13 02:59:23.207791] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.619 [2024-05-13 02:59:23.221726] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.619 [2024-05-13 02:59:23.221769] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.619 [2024-05-13 02:59:23.233202] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.619 [2024-05-13 02:59:23.233248] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.619 [2024-05-13 02:59:23.243906] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.619 [2024-05-13 02:59:23.243934] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.619 [2024-05-13 02:59:23.256302] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.619 [2024-05-13 02:59:23.256328] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.619 [2024-05-13 02:59:23.267788] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.619 [2024-05-13 02:59:23.267826] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.619 [2024-05-13 02:59:23.278607] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.619 [2024-05-13 02:59:23.278632] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.619 [2024-05-13 02:59:23.290295] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.619 [2024-05-13 02:59:23.290322] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.619 [2024-05-13 02:59:23.303692] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.619 [2024-05-13 02:59:23.303727] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.619 [2024-05-13 02:59:23.315474] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.619 [2024-05-13 02:59:23.315510] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.619 [2024-05-13 02:59:23.330035] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.619 [2024-05-13 02:59:23.330087] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.619 [2024-05-13 02:59:23.342849] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.619 [2024-05-13 02:59:23.342876] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.619 [2024-05-13 02:59:23.353944] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.619 [2024-05-13 02:59:23.354007] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.619 [2024-05-13 02:59:23.366256] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.619 [2024-05-13 02:59:23.366282] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.619 [2024-05-13 02:59:23.377773] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.619 [2024-05-13 02:59:23.377807] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.619 [2024-05-13 02:59:23.389762] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.619 [2024-05-13 02:59:23.389790] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.619 [2024-05-13 02:59:23.402052] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.619 [2024-05-13 02:59:23.402077] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.619 [2024-05-13 02:59:23.413633] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.619 [2024-05-13 02:59:23.413658] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.880 [2024-05-13 02:59:23.425232] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.880 [2024-05-13 02:59:23.425274] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.880 [2024-05-13 02:59:23.437549] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.880 [2024-05-13 02:59:23.437575] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.880 [2024-05-13 02:59:23.449615] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.880 [2024-05-13 02:59:23.449649] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.880 [2024-05-13 02:59:23.462564] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.880 [2024-05-13 02:59:23.462591] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.880 [2024-05-13 02:59:23.474291] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.880 [2024-05-13 02:59:23.474326] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.880 [2024-05-13 02:59:23.486376] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.880 [2024-05-13 02:59:23.486402] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.880 [2024-05-13 02:59:23.497163] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.880 [2024-05-13 02:59:23.497189] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.880 [2024-05-13 02:59:23.510564] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.880 [2024-05-13 02:59:23.510590] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.880 [2024-05-13 02:59:23.524600] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.880 [2024-05-13 02:59:23.524625] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.880 [2024-05-13 02:59:23.537174] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.880 [2024-05-13 02:59:23.537200] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.880 [2024-05-13 02:59:23.549232] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.880 [2024-05-13 02:59:23.549258] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.880 [2024-05-13 02:59:23.561326] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.880 [2024-05-13 02:59:23.561353] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.880 [2024-05-13 02:59:23.574150] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.880 [2024-05-13 02:59:23.574176] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.880 [2024-05-13 02:59:23.585014] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.880 [2024-05-13 02:59:23.585064] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.880 [2024-05-13 02:59:23.595941] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the two-line error pair above repeats continuously, one add-namespace attempt roughly every 10-15 ms, from 02:59:23.595 through 02:59:24.297 while the zcopy verify I/O is still running; per-attempt timestamps omitted here, and the remaining attempts continue below ...]
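(For context: the recurring pair above is the target's standard response when an nvmf_subsystem_add_ns RPC requests an NSID that is already attached to the subsystem; the zcopy test appears to issue that RPC repeatedly on purpose while the verify I/O is in flight. A minimal reproduction sketch, assuming a running nvmf_tgt whose subsystem nqn.2016-06.io.spdk:cnode1 already has NSID 1 attached, and using an illustrative bdev name; the rpc_cmd helper seen in this trace is essentially a wrapper around scripts/rpc.py:
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1   # rejected: "Requested NSID 1 already in use"
The second call is the one that produces the subsystem.c/nvmf_rpc.c error pair logged above.)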
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.676 [2024-05-13 02:59:24.297401] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.676 [2024-05-13 02:59:24.309114] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.676 [2024-05-13 02:59:24.309154] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.676 [2024-05-13 02:59:24.320611] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.676 [2024-05-13 02:59:24.320648] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.676 [2024-05-13 02:59:24.331916] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.676 [2024-05-13 02:59:24.331944] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.676 [2024-05-13 02:59:24.343765] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.676 [2024-05-13 02:59:24.343802] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.676 [2024-05-13 02:59:24.348107] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.676 [2024-05-13 02:59:24.348135] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.676 00:18:33.676 Latency(us) 00:18:33.676 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:33.676 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:18:33.676 Nvme1n1 : 5.01 10655.75 83.25 0.00 0.00 11992.13 4029.25 24758.04 00:18:33.676 =================================================================================================================== 00:18:33.676 Total : 10655.75 83.25 0.00 0.00 11992.13 4029.25 24758.04 00:18:33.676 [2024-05-13 02:59:24.356118] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.676 [2024-05-13 02:59:24.356147] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.676 [2024-05-13 02:59:24.364133] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.676 [2024-05-13 02:59:24.364159] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.676 [2024-05-13 02:59:24.372222] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.676 [2024-05-13 02:59:24.372272] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.676 [2024-05-13 02:59:24.380227] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.676 [2024-05-13 02:59:24.380268] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.676 [2024-05-13 02:59:24.388253] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.676 [2024-05-13 02:59:24.388300] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.676 [2024-05-13 02:59:24.396266] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.676 [2024-05-13 02:59:24.396320] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.676 [2024-05-13 02:59:24.404296] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.676 [2024-05-13 02:59:24.404342] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.676 [2024-05-13 02:59:24.412316] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.676 [2024-05-13 02:59:24.412357] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.676 [2024-05-13 02:59:24.420343] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.676 [2024-05-13 02:59:24.420389] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.676 [2024-05-13 02:59:24.428365] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.676 [2024-05-13 02:59:24.428405] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.676 [2024-05-13 02:59:24.436378] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.676 [2024-05-13 02:59:24.436425] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.676 [2024-05-13 02:59:24.444397] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.676 [2024-05-13 02:59:24.444439] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.676 [2024-05-13 02:59:24.452421] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.676 [2024-05-13 02:59:24.452467] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.676 [2024-05-13 02:59:24.460445] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.676 [2024-05-13 02:59:24.460487] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.676 [2024-05-13 02:59:24.468465] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.676 [2024-05-13 02:59:24.468511] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.676 [2024-05-13 02:59:24.476474] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.676 [2024-05-13 02:59:24.476512] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.936 [2024-05-13 02:59:24.484474] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.936 [2024-05-13 02:59:24.484503] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.936 [2024-05-13 02:59:24.492532] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.936 [2024-05-13 02:59:24.492571] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.936 [2024-05-13 02:59:24.500564] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.936 [2024-05-13 02:59:24.500610] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.936 [2024-05-13 02:59:24.508577] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.936 [2024-05-13 02:59:24.508618] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.936 [2024-05-13 02:59:24.516561] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.936 [2024-05-13 02:59:24.516588] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.936 [2024-05-13 02:59:24.524636] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.936 [2024-05-13 02:59:24.524677] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.936 [2024-05-13 02:59:24.532644] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.936 [2024-05-13 02:59:24.532689] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.936 [2024-05-13 02:59:24.540672] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.936 [2024-05-13 02:59:24.540719] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.936 [2024-05-13 02:59:24.548643] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.936 [2024-05-13 02:59:24.548668] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.936 [2024-05-13 02:59:24.556664] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.936 [2024-05-13 02:59:24.556688] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.936 [2024-05-13 02:59:24.564687] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.936 [2024-05-13 02:59:24.564719] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.936 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (350400) - No such process 00:18:33.936 02:59:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 350400 00:18:33.936 02:59:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:33.936 02:59:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.936 02:59:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:33.936 02:59:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.936 02:59:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:33.936 02:59:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.936 02:59:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:33.936 delay0 00:18:33.936 02:59:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.936 02:59:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:18:33.936 02:59:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.936 02:59:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:33.936 02:59:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.936 02:59:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:18:33.936 EAL: No free 2048 kB hugepages reported on node 1 00:18:33.936 [2024-05-13 02:59:24.692532] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:40.507 Initializing NVMe Controllers 00:18:40.507 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:40.507 Associating 
TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:40.507 Initialization complete. Launching workers. 00:18:40.507 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 81 00:18:40.507 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 362, failed to submit 39 00:18:40.507 success 193, unsuccess 169, failed 0 00:18:40.507 02:59:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:40.507 02:59:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:18:40.507 02:59:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:40.507 02:59:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:18:40.507 02:59:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:40.507 02:59:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:18:40.507 02:59:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:40.507 02:59:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:40.507 rmmod nvme_tcp 00:18:40.507 rmmod nvme_fabrics 00:18:40.507 rmmod nvme_keyring 00:18:40.507 02:59:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:40.507 02:59:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:18:40.507 02:59:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:18:40.507 02:59:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 349071 ']' 00:18:40.507 02:59:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 349071 00:18:40.507 02:59:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 349071 ']' 00:18:40.507 02:59:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 349071 00:18:40.507 02:59:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:18:40.507 02:59:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:40.507 02:59:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 349071 00:18:40.507 02:59:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:40.507 02:59:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:40.507 02:59:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 349071' 00:18:40.507 killing process with pid 349071 00:18:40.507 02:59:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 349071 00:18:40.507 [2024-05-13 02:59:30.924107] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:40.507 02:59:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 349071 00:18:40.507 02:59:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:40.507 02:59:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:40.507 02:59:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:40.507 02:59:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:40.507 02:59:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:40.507 02:59:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.507 02:59:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:18:40.507 02:59:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.415 02:59:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:42.415 00:18:42.415 real 0m27.471s 00:18:42.415 user 0m39.801s 00:18:42.415 sys 0m8.489s 00:18:42.415 02:59:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:42.415 02:59:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:42.415 ************************************ 00:18:42.415 END TEST nvmf_zcopy 00:18:42.415 ************************************ 00:18:42.674 02:59:33 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:42.674 02:59:33 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:42.674 02:59:33 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:42.674 02:59:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:42.674 ************************************ 00:18:42.674 START TEST nvmf_nmic 00:18:42.674 ************************************ 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:42.674 * Looking for test storage... 00:18:42.674 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic 
-- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:18:42.674 02:59:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:44.577 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:44.577 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:18:44.577 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:44.577 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:44.577 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:44.577 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:44.577 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:44.577 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:18:44.577 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:44.577 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:18:44.577 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:18:44.577 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:18:44.577 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:18:44.577 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:18:44.577 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:18:44.577 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:44.577 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:44.577 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:44.577 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:44.577 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:44.577 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:44.577 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:44.577 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:44.578 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:44.578 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:44.578 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:44.578 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:44.578 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:44.838 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:44.838 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:44.838 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:44.838 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:44.838 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:44.838 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:44.838 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:44.838 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:44.838 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:18:44.838 00:18:44.838 --- 10.0.0.2 ping statistics --- 00:18:44.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:44.838 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:18:44.838 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:44.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:44.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:18:44.838 00:18:44.838 --- 10.0.0.1 ping statistics --- 00:18:44.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:44.838 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:18:44.838 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:44.838 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:18:44.838 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:44.838 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:44.838 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:44.838 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:44.838 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:44.838 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:44.838 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:44.838 02:59:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:44.838 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:44.838 02:59:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:44.838 02:59:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:44.838 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=353655 00:18:44.838 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:44.838 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 353655 00:18:44.838 02:59:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 353655 ']' 00:18:44.838 02:59:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:44.838 02:59:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:44.838 02:59:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:44.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:44.838 02:59:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:44.838 02:59:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:44.838 [2024-05-13 02:59:35.536137] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 
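(Aside on the network plumbing traced just above: nvmf_tcp_init places one port of the NIC into a private network namespace so that target and initiator can exchange real TCP traffic on a single host. A condensed sketch of the same steps, with the interface and namespace names taken from this run and everything else illustrative:
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator-side port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # the sanity ping whose output appears above
The nvmf_tgt process is then started inside that namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is the application producing the SPDK/DPDK startup messages in this trace.)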
00:18:44.838 [2024-05-13 02:59:35.536224] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:44.838 EAL: No free 2048 kB hugepages reported on node 1 00:18:44.838 [2024-05-13 02:59:35.574069] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:44.838 [2024-05-13 02:59:35.606151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:45.097 [2024-05-13 02:59:35.698134] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:45.097 [2024-05-13 02:59:35.698193] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:45.097 [2024-05-13 02:59:35.698213] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:45.097 [2024-05-13 02:59:35.698227] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:45.097 [2024-05-13 02:59:35.698239] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:45.097 [2024-05-13 02:59:35.698328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:45.097 [2024-05-13 02:59:35.698377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:45.097 [2024-05-13 02:59:35.698479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:45.097 [2024-05-13 02:59:35.698482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.097 02:59:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:45.097 02:59:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:18:45.097 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:45.097 02:59:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:45.097 02:59:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:45.097 02:59:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:45.097 02:59:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:45.097 02:59:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.097 02:59:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:45.097 [2024-05-13 02:59:35.841292] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:45.097 02:59:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.097 02:59:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:45.097 02:59:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.097 02:59:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:45.097 Malloc0 00:18:45.097 02:59:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.097 02:59:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:45.097 02:59:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.097 02:59:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:45.097 
02:59:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.097 02:59:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:45.097 02:59:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.097 02:59:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:45.097 02:59:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.097 02:59:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:45.097 02:59:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.097 02:59:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:45.097 [2024-05-13 02:59:35.892046] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:45.097 [2024-05-13 02:59:35.892327] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:45.097 02:59:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.097 02:59:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:45.097 test case1: single bdev can't be used in multiple subsystems 00:18:45.097 02:59:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:45.097 02:59:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.097 02:59:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:45.355 02:59:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.355 02:59:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:45.355 02:59:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.355 02:59:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:45.355 02:59:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.355 02:59:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:18:45.355 02:59:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:45.355 02:59:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.355 02:59:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:45.355 [2024-05-13 02:59:35.916164] bdev.c:8011:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:45.355 [2024-05-13 02:59:35.916192] subsystem.c:2015:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:45.355 [2024-05-13 02:59:35.916207] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.355 request: 00:18:45.355 { 00:18:45.355 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:45.355 "namespace": { 00:18:45.355 "bdev_name": "Malloc0", 00:18:45.355 "no_auto_visible": false 00:18:45.355 }, 00:18:45.355 "method": "nvmf_subsystem_add_ns", 00:18:45.355 "req_id": 1 00:18:45.355 } 00:18:45.355 Got JSON-RPC error response 00:18:45.355 response: 00:18:45.355 { 00:18:45.355 "code": -32602, 
00:18:45.355 "message": "Invalid parameters" 00:18:45.355 } 00:18:45.355 02:59:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:45.355 02:59:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:18:45.355 02:59:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:45.355 02:59:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:18:45.355 Adding namespace failed - expected result. 00:18:45.355 02:59:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:45.355 test case2: host connect to nvmf target in multiple paths 00:18:45.355 02:59:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:45.355 02:59:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.355 02:59:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:45.355 [2024-05-13 02:59:35.924271] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:45.355 02:59:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.355 02:59:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:45.925 02:59:36 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:46.494 02:59:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:46.494 02:59:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:18:46.494 02:59:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:46.494 02:59:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:18:46.494 02:59:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:18:48.400 02:59:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:48.400 02:59:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:48.400 02:59:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:18:48.400 02:59:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:18:48.400 02:59:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:48.400 02:59:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:18:48.400 02:59:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:48.400 [global] 00:18:48.400 thread=1 00:18:48.400 invalidate=1 00:18:48.400 rw=write 00:18:48.400 time_based=1 00:18:48.400 runtime=1 00:18:48.400 ioengine=libaio 00:18:48.400 direct=1 00:18:48.400 bs=4096 00:18:48.400 iodepth=1 00:18:48.400 norandommap=0 00:18:48.400 numjobs=1 00:18:48.400 00:18:48.400 verify_dump=1 00:18:48.400 verify_backlog=512 00:18:48.400 verify_state_save=0 00:18:48.400 do_verify=1 00:18:48.400 verify=crc32c-intel 00:18:48.400 [job0] 00:18:48.400 
filename=/dev/nvme0n1 00:18:48.400 Could not set queue depth (nvme0n1) 00:18:48.658 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:48.658 fio-3.35 00:18:48.658 Starting 1 thread 00:18:50.034 00:18:50.034 job0: (groupid=0, jobs=1): err= 0: pid=354291: Mon May 13 02:59:40 2024 00:18:50.034 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:18:50.034 slat (nsec): min=5887, max=80502, avg=19720.14, stdev=8303.20 00:18:50.034 clat (usec): min=519, max=823, avg=591.40, stdev=38.48 00:18:50.034 lat (usec): min=533, max=856, avg=611.12, stdev=43.49 00:18:50.034 clat percentiles (usec): 00:18:50.034 | 1.00th=[ 529], 5.00th=[ 545], 10.00th=[ 553], 20.00th=[ 562], 00:18:50.034 | 30.00th=[ 570], 40.00th=[ 578], 50.00th=[ 586], 60.00th=[ 594], 00:18:50.034 | 70.00th=[ 603], 80.00th=[ 611], 90.00th=[ 635], 95.00th=[ 668], 00:18:50.034 | 99.00th=[ 734], 99.50th=[ 758], 99.90th=[ 783], 99.95th=[ 824], 00:18:50.034 | 99.99th=[ 824] 00:18:50.034 write: IOPS=1107, BW=4432KiB/s (4538kB/s)(4436KiB/1001msec); 0 zone resets 00:18:50.034 slat (nsec): min=6961, max=64551, avg=18173.41, stdev=9589.41 00:18:50.034 clat (usec): min=259, max=1473, avg=308.54, stdev=58.73 00:18:50.034 lat (usec): min=267, max=1486, avg=326.72, stdev=63.04 00:18:50.034 clat percentiles (usec): 00:18:50.034 | 1.00th=[ 265], 5.00th=[ 269], 10.00th=[ 269], 20.00th=[ 277], 00:18:50.034 | 30.00th=[ 285], 40.00th=[ 297], 50.00th=[ 302], 60.00th=[ 306], 00:18:50.034 | 70.00th=[ 310], 80.00th=[ 322], 90.00th=[ 363], 95.00th=[ 388], 00:18:50.034 | 99.00th=[ 449], 99.50th=[ 502], 99.90th=[ 1221], 99.95th=[ 1467], 00:18:50.034 | 99.99th=[ 1467] 00:18:50.035 bw ( KiB/s): min= 4840, max= 4840, per=100.00%, avg=4840.00, stdev= 0.00, samples=1 00:18:50.035 iops : min= 1210, max= 1210, avg=1210.00, stdev= 0.00, samples=1 00:18:50.035 lat (usec) : 500=51.71%, 750=47.91%, 1000=0.28% 00:18:50.035 lat (msec) : 2=0.09% 00:18:50.035 cpu : usr=2.60%, sys=5.80%, ctx=2133, majf=0, minf=2 00:18:50.035 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:50.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:50.035 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:50.035 issued rwts: total=1024,1109,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:50.035 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:50.035 00:18:50.035 Run status group 0 (all jobs): 00:18:50.035 READ: bw=4092KiB/s (4190kB/s), 4092KiB/s-4092KiB/s (4190kB/s-4190kB/s), io=4096KiB (4194kB), run=1001-1001msec 00:18:50.035 WRITE: bw=4432KiB/s (4538kB/s), 4432KiB/s-4432KiB/s (4538kB/s-4538kB/s), io=4436KiB (4542kB), run=1001-1001msec 00:18:50.035 00:18:50.035 Disk stats (read/write): 00:18:50.035 nvme0n1: ios=947/1024, merge=0/0, ticks=568/293, in_queue=861, util=92.79% 00:18:50.035 02:59:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:50.035 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:50.035 02:59:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:50.035 02:59:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:18:50.035 02:59:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:18:50.035 02:59:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:50.035 02:59:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- 
# lsblk -l -o NAME,SERIAL 00:18:50.035 02:59:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:50.035 02:59:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:18:50.035 02:59:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:50.035 02:59:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:18:50.035 02:59:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:50.035 02:59:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:18:50.035 02:59:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:50.035 02:59:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:18:50.035 02:59:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:50.035 02:59:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:50.035 rmmod nvme_tcp 00:18:50.035 rmmod nvme_fabrics 00:18:50.035 rmmod nvme_keyring 00:18:50.035 02:59:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:50.035 02:59:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:18:50.035 02:59:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:18:50.035 02:59:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 353655 ']' 00:18:50.035 02:59:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 353655 00:18:50.035 02:59:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 353655 ']' 00:18:50.035 02:59:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 353655 00:18:50.035 02:59:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:18:50.035 02:59:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:50.035 02:59:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 353655 00:18:50.035 02:59:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:50.035 02:59:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:50.035 02:59:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 353655' 00:18:50.035 killing process with pid 353655 00:18:50.035 02:59:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 353655 00:18:50.035 [2024-05-13 02:59:40.722425] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:50.035 02:59:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 353655 00:18:50.293 02:59:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:50.293 02:59:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:50.293 02:59:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:50.293 02:59:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:50.293 02:59:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:50.293 02:59:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:50.293 02:59:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:50.293 02:59:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:52.200 02:59:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:52.459 
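(For reference, the two nmic cases traced above reduce to a short command sequence; the sketch below targets the same running setup, with subsystem, bdev and address values as in this run, the host NQN/ID flags omitted for brevity (the run above passes --hostnqn/--hostid explicitly), and the error handling purely illustrative:
  # case1: a bdev already claimed by cnode1 cannot be added to a second subsystem
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || echo "expected failure: Malloc0 already claimed"
  # case2: one host connects to the same subsystem over two listeners (multipath)
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
  # the verify workload above is roughly equivalent to this plain fio invocation
  fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 --bs=4096 --iodepth=1 --rw=write --time_based=1 --runtime=1 --verify=crc32c-intel --do_verify=1
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1          # drops both paths, hence "disconnected 2 controller(s)" above
)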
00:18:52.459 real 0m9.740s 00:18:52.459 user 0m21.708s 00:18:52.459 sys 0m2.415s 00:18:52.459 02:59:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:52.460 02:59:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:52.460 ************************************ 00:18:52.460 END TEST nvmf_nmic 00:18:52.460 ************************************ 00:18:52.460 02:59:43 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:52.460 02:59:43 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:52.460 02:59:43 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:52.460 02:59:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:52.460 ************************************ 00:18:52.460 START TEST nvmf_fio_target 00:18:52.460 ************************************ 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:52.460 * Looking for test storage... 00:18:52.460 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
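Before fio.sh can connect anything, the nvmf/common.sh sourced above pins the NVMe/TCP port to 4420 and derives the initiator identity from nvme-cli. A minimal sketch of that derivation in shell; the parameter expansion used to split the UUID out of the NQN is an assumption, but it reproduces the NVME_HOSTNQN/NVME_HOSTID pair seen in the trace:

    NVMF_PORT=4420
    NVMF_SERIAL=SPDKISFASTANDAWESOME
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # prints nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}         # bare UUID, reused as --hostid
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    echo "will connect with: ${NVME_HOST[*]} on port $NVMF_PORT"

These are exactly the values handed to nvme connect later in this test (the 5b23e107-... UUID above).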
00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:52.460 
02:59:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:52.460 02:59:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.361 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:54.361 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:54.361 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:54.361 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:54.361 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:54.361 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:54.361 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:54.361 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:54.361 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:54.361 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:18:54.361 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:54.361 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:18:54.361 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:54.361 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:18:54.361 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:54.361 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:54.361 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:54.361 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:54.361 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:54.361 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:54.361 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
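The gather_supported_nvmf_pci_devs trace above builds whitelists of NIC PCI IDs (Intel E810 0x1592/0x159b, X722 0x37d2, plus several Mellanox ConnectX IDs) and later resolves each matching device to its netdev name via /sys/bus/pci/devices/<bdf>/net. A rough stand-alone equivalent for the E810 case used on this host, assuming the usual sysfs layout (an illustration, not the suite's helper):

    #!/usr/bin/env bash
    # List Intel E810 ports (vendor 0x8086, device 0x1592 or 0x159b) and their netdev names.
    for pci in /sys/bus/pci/devices/*; do
        [ "$(cat "$pci/vendor")" = 0x8086 ] || continue
        case "$(cat "$pci/device")" in 0x1592|0x159b) ;; *) continue ;; esac
        echo "supported NIC ${pci##*/} -> $(ls "$pci/net" 2>/dev/null)"
    done

On this node that maps 0000:0a:00.0 and 0000:0a:00.1 to cvl_0_0 and cvl_0_1, the two interfaces the rest of the run is built on.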
00:18:54.361 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:54.361 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:54.361 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:54.361 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:54.361 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:54.361 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:54.361 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:54.361 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:54.361 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:54.361 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:54.361 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:54.361 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:54.361 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:54.361 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:54.361 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:54.361 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:54.362 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:54.362 
02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:54.362 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:54.362 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set 
cvl_0_1 up 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:54.362 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:54.623 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:54.623 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:54.623 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:18:54.623 00:18:54.623 --- 10.0.0.2 ping statistics --- 00:18:54.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:54.623 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:18:54.623 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:54.623 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:54.623 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:18:54.623 00:18:54.623 --- 10.0.0.1 ping statistics --- 00:18:54.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:54.623 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:18:54.623 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:54.623 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:18:54.623 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:54.623 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:54.623 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:54.623 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:54.623 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:54.623 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:54.623 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:54.623 02:59:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:54.623 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:54.623 02:59:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:54.623 02:59:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.623 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=356364 00:18:54.623 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:54.623 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 356364 00:18:54.623 02:59:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 356364 ']' 00:18:54.623 02:59:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:54.623 02:59:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:54.623 02:59:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:54.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
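The nvmf_tcp_init trace above splits the two E810 ports into an initiator side (cvl_0_1, 10.0.0.1) on the host and a target side (cvl_0_0, 10.0.0.2) inside a dedicated network namespace, opens TCP/4420, and ping-checks both directions before nvmf_tgt is launched in that namespace. Restated as a plain command list taken from the trace (only the comments are added):

    ip netns add cvl_0_0_ns_spdk                                        # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, host side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, namespace side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                  # host -> namespace check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # namespace -> host check

From here on only the nvmf_tgt application itself is launched via ip netns exec cvl_0_0_ns_spdk (the NVMF_TARGET_NS_CMD prefix visible above), while the rpc.py configuration calls that follow still run from the host over /var/tmp/spdk.sock.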
00:18:54.624 02:59:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:54.624 02:59:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.624 [2024-05-13 02:59:45.244810] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:18:54.624 [2024-05-13 02:59:45.244886] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:54.624 EAL: No free 2048 kB hugepages reported on node 1 00:18:54.624 [2024-05-13 02:59:45.283959] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:54.624 [2024-05-13 02:59:45.310573] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:54.624 [2024-05-13 02:59:45.404758] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:54.624 [2024-05-13 02:59:45.404806] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:54.624 [2024-05-13 02:59:45.404835] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:54.624 [2024-05-13 02:59:45.404846] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:54.624 [2024-05-13 02:59:45.404857] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:54.624 [2024-05-13 02:59:45.404941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:54.624 [2024-05-13 02:59:45.404975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:54.624 [2024-05-13 02:59:45.405025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:54.624 [2024-05-13 02:59:45.405027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.913 02:59:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:54.913 02:59:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:18:54.913 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:54.913 02:59:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:54.913 02:59:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.913 02:59:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:54.913 02:59:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:55.172 [2024-05-13 02:59:45.830473] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:55.172 02:59:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:55.430 02:59:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:18:55.430 02:59:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:55.688 02:59:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:18:55.688 02:59:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:55.945 02:59:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:18:56.202 02:59:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:56.460 02:59:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:18:56.460 02:59:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:18:56.717 02:59:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:56.974 02:59:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:18:56.974 02:59:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:57.232 02:59:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:18:57.232 02:59:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:57.489 02:59:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:18:57.489 02:59:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:18:57.746 02:59:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:58.004 02:59:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:58.004 02:59:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:58.261 02:59:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:58.261 02:59:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:58.518 02:59:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:58.518 [2024-05-13 02:59:49.291691] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:58.518 [2024-05-13 02:59:49.291999] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:58.518 02:59:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:18:58.775 02:59:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:18:59.033 02:59:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:59.966 02:59:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:18:59.966 02:59:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:18:59.966 02:59:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:59.966 02:59:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:18:59.966 02:59:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:18:59.966 02:59:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:19:01.865 02:59:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:19:01.865 02:59:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:19:01.865 02:59:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:19:01.865 02:59:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:19:01.865 02:59:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:19:01.865 02:59:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:19:01.865 02:59:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:01.865 [global] 00:19:01.865 thread=1 00:19:01.865 invalidate=1 00:19:01.865 rw=write 00:19:01.865 time_based=1 00:19:01.865 runtime=1 00:19:01.865 ioengine=libaio 00:19:01.865 direct=1 00:19:01.865 bs=4096 00:19:01.865 iodepth=1 00:19:01.865 norandommap=0 00:19:01.865 numjobs=1 00:19:01.865 00:19:01.865 verify_dump=1 00:19:01.865 verify_backlog=512 00:19:01.865 verify_state_save=0 00:19:01.865 do_verify=1 00:19:01.865 verify=crc32c-intel 00:19:01.865 [job0] 00:19:01.865 filename=/dev/nvme0n1 00:19:01.865 [job1] 00:19:01.865 filename=/dev/nvme0n2 00:19:01.865 [job2] 00:19:01.865 filename=/dev/nvme0n3 00:19:01.865 [job3] 00:19:01.865 filename=/dev/nvme0n4 00:19:01.865 Could not set queue depth (nvme0n1) 00:19:01.865 Could not set queue depth (nvme0n2) 00:19:01.865 Could not set queue depth (nvme0n3) 00:19:01.865 Could not set queue depth (nvme0n4) 00:19:02.122 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:02.122 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:02.122 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:02.122 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:02.122 fio-3.35 00:19:02.122 Starting 4 threads 00:19:03.511 00:19:03.511 job0: (groupid=0, jobs=1): err= 0: pid=357319: Mon May 13 02:59:53 2024 00:19:03.511 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:19:03.511 slat (nsec): min=7253, max=60584, avg=17642.61, stdev=6733.41 00:19:03.511 clat (usec): min=386, max=1074, avg=505.03, stdev=50.96 00:19:03.511 lat (usec): min=402, max=1097, avg=522.68, stdev=52.25 00:19:03.511 clat percentiles (usec): 00:19:03.511 | 1.00th=[ 408], 5.00th=[ 420], 10.00th=[ 437], 20.00th=[ 474], 00:19:03.511 | 30.00th=[ 
486], 40.00th=[ 494], 50.00th=[ 502], 60.00th=[ 510], 00:19:03.511 | 70.00th=[ 523], 80.00th=[ 537], 90.00th=[ 562], 95.00th=[ 594], 00:19:03.511 | 99.00th=[ 635], 99.50th=[ 668], 99.90th=[ 693], 99.95th=[ 1074], 00:19:03.511 | 99.99th=[ 1074] 00:19:03.511 write: IOPS=1212, BW=4851KiB/s (4968kB/s)(4856KiB/1001msec); 0 zone resets 00:19:03.511 slat (nsec): min=7610, max=75088, avg=18387.94, stdev=9746.18 00:19:03.512 clat (usec): min=257, max=1287, avg=355.66, stdev=63.36 00:19:03.512 lat (usec): min=267, max=1306, avg=374.04, stdev=67.39 00:19:03.512 clat percentiles (usec): 00:19:03.512 | 1.00th=[ 265], 5.00th=[ 297], 10.00th=[ 302], 20.00th=[ 306], 00:19:03.512 | 30.00th=[ 330], 40.00th=[ 334], 50.00th=[ 343], 60.00th=[ 363], 00:19:03.512 | 70.00th=[ 379], 80.00th=[ 400], 90.00th=[ 420], 95.00th=[ 433], 00:19:03.512 | 99.00th=[ 478], 99.50th=[ 578], 99.90th=[ 1090], 99.95th=[ 1287], 00:19:03.512 | 99.99th=[ 1287] 00:19:03.512 bw ( KiB/s): min= 4208, max= 4208, per=33.15%, avg=4208.00, stdev= 0.00, samples=1 00:19:03.512 iops : min= 1052, max= 1052, avg=1052.00, stdev= 0.00, samples=1 00:19:03.512 lat (usec) : 500=75.20%, 750=24.62%, 1000=0.04% 00:19:03.512 lat (msec) : 2=0.13% 00:19:03.512 cpu : usr=2.30%, sys=3.90%, ctx=2239, majf=0, minf=1 00:19:03.512 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:03.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.512 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.512 issued rwts: total=1024,1214,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.512 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:03.512 job1: (groupid=0, jobs=1): err= 0: pid=357329: Mon May 13 02:59:53 2024 00:19:03.512 read: IOPS=30, BW=124KiB/s (126kB/s)(124KiB/1004msec) 00:19:03.512 slat (nsec): min=7593, max=46270, avg=16472.35, stdev=6693.27 00:19:03.512 clat (usec): min=538, max=41106, avg=25329.08, stdev=19998.60 00:19:03.512 lat (usec): min=553, max=41121, avg=25345.56, stdev=20000.51 00:19:03.512 clat percentiles (usec): 00:19:03.512 | 1.00th=[ 537], 5.00th=[ 553], 10.00th=[ 562], 20.00th=[ 570], 00:19:03.512 | 30.00th=[ 586], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:19:03.512 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:03.512 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:03.512 | 99.99th=[41157] 00:19:03.512 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:19:03.512 slat (nsec): min=8120, max=73262, avg=24347.55, stdev=12419.01 00:19:03.512 clat (usec): min=332, max=1814, avg=395.25, stdev=91.10 00:19:03.512 lat (usec): min=341, max=1855, avg=419.60, stdev=93.92 00:19:03.512 clat percentiles (usec): 00:19:03.512 | 1.00th=[ 334], 5.00th=[ 338], 10.00th=[ 347], 20.00th=[ 359], 00:19:03.512 | 30.00th=[ 371], 40.00th=[ 375], 50.00th=[ 383], 60.00th=[ 396], 00:19:03.512 | 70.00th=[ 404], 80.00th=[ 416], 90.00th=[ 433], 95.00th=[ 449], 00:19:03.512 | 99.00th=[ 553], 99.50th=[ 1074], 99.90th=[ 1811], 99.95th=[ 1811], 00:19:03.512 | 99.99th=[ 1811] 00:19:03.512 bw ( KiB/s): min= 4096, max= 4096, per=32.27%, avg=4096.00, stdev= 0.00, samples=1 00:19:03.512 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:03.512 lat (usec) : 500=91.90%, 750=4.05% 00:19:03.512 lat (msec) : 2=0.55%, 50=3.50% 00:19:03.512 cpu : usr=0.40%, sys=1.40%, ctx=544, majf=0, minf=2 00:19:03.512 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:19:03.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.512 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.512 issued rwts: total=31,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.512 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:03.512 job2: (groupid=0, jobs=1): err= 0: pid=357362: Mon May 13 02:59:53 2024 00:19:03.512 read: IOPS=999, BW=3996KiB/s (4092kB/s)(4000KiB/1001msec) 00:19:03.512 slat (nsec): min=7494, max=55422, avg=14378.71, stdev=6647.84 00:19:03.512 clat (usec): min=458, max=811, avg=599.85, stdev=51.79 00:19:03.512 lat (usec): min=472, max=828, avg=614.23, stdev=55.36 00:19:03.512 clat percentiles (usec): 00:19:03.512 | 1.00th=[ 529], 5.00th=[ 545], 10.00th=[ 545], 20.00th=[ 553], 00:19:03.512 | 30.00th=[ 570], 40.00th=[ 578], 50.00th=[ 586], 60.00th=[ 603], 00:19:03.512 | 70.00th=[ 619], 80.00th=[ 635], 90.00th=[ 685], 95.00th=[ 701], 00:19:03.512 | 99.00th=[ 742], 99.50th=[ 766], 99.90th=[ 816], 99.95th=[ 816], 00:19:03.512 | 99.99th=[ 816] 00:19:03.512 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:19:03.512 slat (nsec): min=9259, max=66944, avg=15519.13, stdev=8030.32 00:19:03.512 clat (usec): min=260, max=1866, avg=352.62, stdev=75.57 00:19:03.512 lat (usec): min=271, max=1884, avg=368.14, stdev=78.38 00:19:03.512 clat percentiles (usec): 00:19:03.512 | 1.00th=[ 269], 5.00th=[ 293], 10.00th=[ 306], 20.00th=[ 310], 00:19:03.512 | 30.00th=[ 322], 40.00th=[ 334], 50.00th=[ 343], 60.00th=[ 351], 00:19:03.512 | 70.00th=[ 363], 80.00th=[ 379], 90.00th=[ 412], 95.00th=[ 449], 00:19:03.512 | 99.00th=[ 529], 99.50th=[ 553], 99.90th=[ 1401], 99.95th=[ 1860], 00:19:03.512 | 99.99th=[ 1860] 00:19:03.512 bw ( KiB/s): min= 4096, max= 4096, per=32.27%, avg=4096.00, stdev= 0.00, samples=1 00:19:03.512 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:03.512 lat (usec) : 500=49.90%, 750=49.70%, 1000=0.30% 00:19:03.512 lat (msec) : 2=0.10% 00:19:03.512 cpu : usr=1.90%, sys=4.50%, ctx=2026, majf=0, minf=1 00:19:03.512 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:03.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.512 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.512 issued rwts: total=1000,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.512 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:03.512 job3: (groupid=0, jobs=1): err= 0: pid=357375: Mon May 13 02:59:53 2024 00:19:03.512 read: IOPS=19, BW=77.8KiB/s (79.7kB/s)(80.0KiB/1028msec) 00:19:03.512 slat (nsec): min=9623, max=42216, avg=16824.75, stdev=6192.65 00:19:03.512 clat (usec): min=40876, max=42037, avg=41481.05, stdev=520.40 00:19:03.512 lat (usec): min=40892, max=42052, avg=41497.88, stdev=519.48 00:19:03.512 clat percentiles (usec): 00:19:03.512 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:19:03.512 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[42206], 00:19:03.512 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:19:03.512 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:03.512 | 99.99th=[42206] 00:19:03.512 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:19:03.512 slat (nsec): min=7218, max=81964, avg=24442.69, stdev=12862.67 00:19:03.512 clat (usec): min=262, max=1715, avg=356.19, stdev=86.34 00:19:03.512 lat (usec): min=278, max=1753, 
avg=380.63, stdev=87.11 00:19:03.512 clat percentiles (usec): 00:19:03.512 | 1.00th=[ 273], 5.00th=[ 285], 10.00th=[ 297], 20.00th=[ 310], 00:19:03.512 | 30.00th=[ 326], 40.00th=[ 338], 50.00th=[ 351], 60.00th=[ 359], 00:19:03.512 | 70.00th=[ 371], 80.00th=[ 388], 90.00th=[ 416], 95.00th=[ 441], 00:19:03.512 | 99.00th=[ 490], 99.50th=[ 603], 99.90th=[ 1713], 99.95th=[ 1713], 00:19:03.512 | 99.99th=[ 1713] 00:19:03.512 bw ( KiB/s): min= 4096, max= 4096, per=32.27%, avg=4096.00, stdev= 0.00, samples=1 00:19:03.512 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:03.512 lat (usec) : 500=95.49%, 750=0.38% 00:19:03.512 lat (msec) : 2=0.38%, 50=3.76% 00:19:03.512 cpu : usr=0.10%, sys=1.66%, ctx=533, majf=0, minf=1 00:19:03.512 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:03.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.512 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.512 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.512 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:03.512 00:19:03.512 Run status group 0 (all jobs): 00:19:03.512 READ: bw=8074KiB/s (8268kB/s), 77.8KiB/s-4092KiB/s (79.7kB/s-4190kB/s), io=8300KiB (8499kB), run=1001-1028msec 00:19:03.512 WRITE: bw=12.4MiB/s (13.0MB/s), 1992KiB/s-4851KiB/s (2040kB/s-4968kB/s), io=12.7MiB (13.4MB), run=1001-1028msec 00:19:03.512 00:19:03.512 Disk stats (read/write): 00:19:03.512 nvme0n1: ios=875/1024, merge=0/0, ticks=1249/358, in_queue=1607, util=85.07% 00:19:03.512 nvme0n2: ios=76/512, merge=0/0, ticks=1534/200, in_queue=1734, util=89.00% 00:19:03.512 nvme0n3: ios=809/1024, merge=0/0, ticks=843/343, in_queue=1186, util=95.07% 00:19:03.512 nvme0n4: ios=72/512, merge=0/0, ticks=1099/177, in_queue=1276, util=94.28% 00:19:03.512 02:59:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:03.512 [global] 00:19:03.512 thread=1 00:19:03.512 invalidate=1 00:19:03.512 rw=randwrite 00:19:03.512 time_based=1 00:19:03.512 runtime=1 00:19:03.512 ioengine=libaio 00:19:03.512 direct=1 00:19:03.512 bs=4096 00:19:03.512 iodepth=1 00:19:03.512 norandommap=0 00:19:03.512 numjobs=1 00:19:03.512 00:19:03.512 verify_dump=1 00:19:03.512 verify_backlog=512 00:19:03.512 verify_state_save=0 00:19:03.512 do_verify=1 00:19:03.512 verify=crc32c-intel 00:19:03.512 [job0] 00:19:03.512 filename=/dev/nvme0n1 00:19:03.512 [job1] 00:19:03.512 filename=/dev/nvme0n2 00:19:03.512 [job2] 00:19:03.512 filename=/dev/nvme0n3 00:19:03.512 [job3] 00:19:03.512 filename=/dev/nvme0n4 00:19:03.512 Could not set queue depth (nvme0n1) 00:19:03.512 Could not set queue depth (nvme0n2) 00:19:03.512 Could not set queue depth (nvme0n3) 00:19:03.512 Could not set queue depth (nvme0n4) 00:19:03.513 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:03.513 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:03.513 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:03.513 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:03.513 fio-3.35 00:19:03.513 Starting 4 threads 00:19:04.886 00:19:04.886 job0: (groupid=0, jobs=1): err= 0: pid=357662: Mon May 13 02:59:55 2024 
00:19:04.886 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:19:04.886 slat (nsec): min=6490, max=60143, avg=19233.19, stdev=8499.86 00:19:04.886 clat (usec): min=531, max=41016, avg=1070.93, stdev=4347.21 00:19:04.886 lat (usec): min=546, max=41050, avg=1090.17, stdev=4347.48 00:19:04.886 clat percentiles (usec): 00:19:04.886 | 1.00th=[ 545], 5.00th=[ 553], 10.00th=[ 562], 20.00th=[ 570], 00:19:04.886 | 30.00th=[ 570], 40.00th=[ 578], 50.00th=[ 586], 60.00th=[ 594], 00:19:04.886 | 70.00th=[ 619], 80.00th=[ 635], 90.00th=[ 660], 95.00th=[ 668], 00:19:04.886 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:04.886 | 99.99th=[41157] 00:19:04.886 write: IOPS=802, BW=3209KiB/s (3286kB/s)(3212KiB/1001msec); 0 zone resets 00:19:04.886 slat (nsec): min=7127, max=74697, avg=25420.74, stdev=13203.29 00:19:04.886 clat (usec): min=254, max=932, avg=514.85, stdev=115.99 00:19:04.886 lat (usec): min=262, max=957, avg=540.27, stdev=121.29 00:19:04.886 clat percentiles (usec): 00:19:04.886 | 1.00th=[ 281], 5.00th=[ 306], 10.00th=[ 392], 20.00th=[ 449], 00:19:04.886 | 30.00th=[ 469], 40.00th=[ 482], 50.00th=[ 490], 60.00th=[ 502], 00:19:04.886 | 70.00th=[ 537], 80.00th=[ 619], 90.00th=[ 693], 95.00th=[ 725], 00:19:04.886 | 99.00th=[ 816], 99.50th=[ 873], 99.90th=[ 930], 99.95th=[ 930], 00:19:04.886 | 99.99th=[ 930] 00:19:04.886 bw ( KiB/s): min= 4096, max= 4096, per=45.57%, avg=4096.00, stdev= 0.00, samples=1 00:19:04.886 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:04.886 lat (usec) : 500=36.50%, 750=60.38%, 1000=2.66% 00:19:04.886 lat (msec) : 50=0.46% 00:19:04.886 cpu : usr=1.80%, sys=2.90%, ctx=1318, majf=0, minf=1 00:19:04.886 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:04.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.886 issued rwts: total=512,803,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.886 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:04.886 job1: (groupid=0, jobs=1): err= 0: pid=357663: Mon May 13 02:59:55 2024 00:19:04.886 read: IOPS=167, BW=669KiB/s (685kB/s)(684KiB/1022msec) 00:19:04.886 slat (nsec): min=5971, max=42187, avg=22733.49, stdev=10150.79 00:19:04.886 clat (usec): min=402, max=41974, avg=5011.53, stdev=12811.81 00:19:04.886 lat (usec): min=424, max=41996, avg=5034.26, stdev=12809.85 00:19:04.886 clat percentiles (usec): 00:19:04.886 | 1.00th=[ 412], 5.00th=[ 433], 10.00th=[ 453], 20.00th=[ 465], 00:19:04.886 | 30.00th=[ 474], 40.00th=[ 478], 50.00th=[ 490], 60.00th=[ 498], 00:19:04.886 | 70.00th=[ 537], 80.00th=[ 570], 90.00th=[41157], 95.00th=[41157], 00:19:04.886 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:04.886 | 99.99th=[42206] 00:19:04.886 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:19:04.886 slat (nsec): min=5913, max=63993, avg=16822.59, stdev=8631.31 00:19:04.886 clat (usec): min=255, max=451, avg=290.26, stdev=28.73 00:19:04.886 lat (usec): min=267, max=468, avg=307.08, stdev=28.81 00:19:04.886 clat percentiles (usec): 00:19:04.886 | 1.00th=[ 260], 5.00th=[ 265], 10.00th=[ 269], 20.00th=[ 273], 00:19:04.886 | 30.00th=[ 277], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 285], 00:19:04.887 | 70.00th=[ 293], 80.00th=[ 306], 90.00th=[ 318], 95.00th=[ 355], 00:19:04.887 | 99.00th=[ 412], 99.50th=[ 433], 99.90th=[ 453], 99.95th=[ 453], 00:19:04.887 | 99.99th=[ 453] 
00:19:04.887 bw ( KiB/s): min= 4096, max= 4096, per=45.57%, avg=4096.00, stdev= 0.00, samples=1 00:19:04.887 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:04.887 lat (usec) : 500=90.34%, 750=6.88% 00:19:04.887 lat (msec) : 50=2.78% 00:19:04.887 cpu : usr=0.78%, sys=1.08%, ctx=683, majf=0, minf=1 00:19:04.887 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:04.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.887 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.887 issued rwts: total=171,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.887 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:04.887 job2: (groupid=0, jobs=1): err= 0: pid=357670: Mon May 13 02:59:55 2024 00:19:04.887 read: IOPS=19, BW=78.0KiB/s (79.8kB/s)(80.0KiB/1026msec) 00:19:04.887 slat (nsec): min=10316, max=36860, avg=19288.90, stdev=8608.76 00:19:04.887 clat (usec): min=40911, max=42510, avg=41202.13, stdev=476.80 00:19:04.887 lat (usec): min=40937, max=42544, avg=41221.41, stdev=480.15 00:19:04.887 clat percentiles (usec): 00:19:04.887 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:19:04.887 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:04.887 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:19:04.887 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:19:04.887 | 99.99th=[42730] 00:19:04.887 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:19:04.887 slat (nsec): min=8156, max=78879, avg=25574.40, stdev=12913.40 00:19:04.887 clat (usec): min=264, max=670, avg=360.28, stdev=58.53 00:19:04.887 lat (usec): min=273, max=711, avg=385.85, stdev=62.87 00:19:04.887 clat percentiles (usec): 00:19:04.887 | 1.00th=[ 277], 5.00th=[ 289], 10.00th=[ 297], 20.00th=[ 310], 00:19:04.887 | 30.00th=[ 322], 40.00th=[ 330], 50.00th=[ 347], 60.00th=[ 367], 00:19:04.887 | 70.00th=[ 392], 80.00th=[ 412], 90.00th=[ 445], 95.00th=[ 469], 00:19:04.887 | 99.00th=[ 515], 99.50th=[ 529], 99.90th=[ 668], 99.95th=[ 668], 00:19:04.887 | 99.99th=[ 668] 00:19:04.887 bw ( KiB/s): min= 4096, max= 4096, per=45.57%, avg=4096.00, stdev= 0.00, samples=1 00:19:04.887 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:04.887 lat (usec) : 500=95.11%, 750=1.13% 00:19:04.887 lat (msec) : 50=3.76% 00:19:04.887 cpu : usr=0.49%, sys=2.05%, ctx=533, majf=0, minf=2 00:19:04.887 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:04.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.887 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.887 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.887 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:04.887 job3: (groupid=0, jobs=1): err= 0: pid=357673: Mon May 13 02:59:55 2024 00:19:04.887 read: IOPS=18, BW=73.0KiB/s (74.8kB/s)(76.0KiB/1041msec) 00:19:04.887 slat (nsec): min=12497, max=34895, avg=21015.00, stdev=8769.52 00:19:04.887 clat (usec): min=40809, max=42065, avg=41223.23, stdev=472.91 00:19:04.887 lat (usec): min=40843, max=42078, avg=41244.25, stdev=469.85 00:19:04.887 clat percentiles (usec): 00:19:04.887 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:19:04.887 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:04.887 | 70.00th=[41157], 80.00th=[42206], 
90.00th=[42206], 95.00th=[42206], 00:19:04.887 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:04.887 | 99.99th=[42206] 00:19:04.887 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:19:04.887 slat (nsec): min=7659, max=84779, avg=27647.68, stdev=14053.91 00:19:04.887 clat (usec): min=262, max=738, avg=467.29, stdev=86.72 00:19:04.887 lat (usec): min=273, max=764, avg=494.94, stdev=89.88 00:19:04.887 clat percentiles (usec): 00:19:04.887 | 1.00th=[ 273], 5.00th=[ 297], 10.00th=[ 363], 20.00th=[ 404], 00:19:04.887 | 30.00th=[ 429], 40.00th=[ 449], 50.00th=[ 469], 60.00th=[ 490], 00:19:04.887 | 70.00th=[ 506], 80.00th=[ 529], 90.00th=[ 578], 95.00th=[ 611], 00:19:04.887 | 99.00th=[ 685], 99.50th=[ 717], 99.90th=[ 742], 99.95th=[ 742], 00:19:04.887 | 99.99th=[ 742] 00:19:04.887 bw ( KiB/s): min= 4096, max= 4096, per=45.57%, avg=4096.00, stdev= 0.00, samples=1 00:19:04.887 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:04.887 lat (usec) : 500=64.78%, 750=31.64% 00:19:04.887 lat (msec) : 50=3.58% 00:19:04.887 cpu : usr=0.58%, sys=1.44%, ctx=533, majf=0, minf=1 00:19:04.887 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:04.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.887 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.887 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.887 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:04.887 00:19:04.887 Run status group 0 (all jobs): 00:19:04.887 READ: bw=2774KiB/s (2841kB/s), 73.0KiB/s-2046KiB/s (74.8kB/s-2095kB/s), io=2888KiB (2957kB), run=1001-1041msec 00:19:04.887 WRITE: bw=8988KiB/s (9203kB/s), 1967KiB/s-3209KiB/s (2015kB/s-3286kB/s), io=9356KiB (9581kB), run=1001-1041msec 00:19:04.887 00:19:04.887 Disk stats (read/write): 00:19:04.887 nvme0n1: ios=503/512, merge=0/0, ticks=1482/277, in_queue=1759, util=93.89% 00:19:04.887 nvme0n2: ios=216/512, merge=0/0, ticks=739/142, in_queue=881, util=91.88% 00:19:04.887 nvme0n3: ios=66/512, merge=0/0, ticks=1029/158, in_queue=1187, util=96.87% 00:19:04.887 nvme0n4: ios=38/512, merge=0/0, ticks=1524/227, in_queue=1751, util=98.00% 00:19:04.887 02:59:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:04.887 [global] 00:19:04.887 thread=1 00:19:04.887 invalidate=1 00:19:04.887 rw=write 00:19:04.887 time_based=1 00:19:04.887 runtime=1 00:19:04.887 ioengine=libaio 00:19:04.887 direct=1 00:19:04.887 bs=4096 00:19:04.887 iodepth=128 00:19:04.887 norandommap=0 00:19:04.887 numjobs=1 00:19:04.887 00:19:04.887 verify_dump=1 00:19:04.887 verify_backlog=512 00:19:04.887 verify_state_save=0 00:19:04.887 do_verify=1 00:19:04.887 verify=crc32c-intel 00:19:04.887 [job0] 00:19:04.887 filename=/dev/nvme0n1 00:19:04.887 [job1] 00:19:04.887 filename=/dev/nvme0n2 00:19:04.887 [job2] 00:19:04.887 filename=/dev/nvme0n3 00:19:04.887 [job3] 00:19:04.887 filename=/dev/nvme0n4 00:19:04.887 Could not set queue depth (nvme0n1) 00:19:04.887 Could not set queue depth (nvme0n2) 00:19:04.887 Could not set queue depth (nvme0n3) 00:19:04.887 Could not set queue depth (nvme0n4) 00:19:04.887 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:04.887 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
00:19:04.887 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:04.887 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:04.887 fio-3.35 00:19:04.887 Starting 4 threads 00:19:06.263 00:19:06.263 job0: (groupid=0, jobs=1): err= 0: pid=357898: Mon May 13 02:59:56 2024 00:19:06.263 read: IOPS=1395, BW=5584KiB/s (5718kB/s)(6008KiB/1076msec) 00:19:06.263 slat (usec): min=2, max=219271, avg=510.03, stdev=7350.70 00:19:06.263 clat (usec): min=1979, max=439166, avg=67321.23, stdev=112006.20 00:19:06.263 lat (msec): min=6, max=439, avg=67.83, stdev=112.67 00:19:06.263 clat percentiles (msec): 00:19:06.263 | 1.00th=[ 11], 5.00th=[ 12], 10.00th=[ 12], 20.00th=[ 13], 00:19:06.263 | 30.00th=[ 14], 40.00th=[ 16], 50.00th=[ 18], 60.00th=[ 20], 00:19:06.263 | 70.00th=[ 26], 80.00th=[ 68], 90.00th=[ 205], 95.00th=[ 380], 00:19:06.263 | 99.00th=[ 430], 99.50th=[ 430], 99.90th=[ 439], 99.95th=[ 439], 00:19:06.263 | 99.99th=[ 439] 00:19:06.263 write: IOPS=1427, BW=5710KiB/s (5847kB/s)(6144KiB/1076msec); 0 zone resets 00:19:06.263 slat (usec): min=3, max=10298, avg=150.01, stdev=792.55 00:19:06.263 clat (usec): min=2418, max=51680, avg=22818.51, stdev=10986.28 00:19:06.263 lat (usec): min=2485, max=51693, avg=22968.52, stdev=11045.53 00:19:06.263 clat percentiles (usec): 00:19:06.263 | 1.00th=[ 6980], 5.00th=[ 8160], 10.00th=[10814], 20.00th=[12256], 00:19:06.263 | 30.00th=[15664], 40.00th=[19530], 50.00th=[21627], 60.00th=[22938], 00:19:06.263 | 70.00th=[26870], 80.00th=[31589], 90.00th=[39060], 95.00th=[46400], 00:19:06.263 | 99.00th=[49021], 99.50th=[50594], 99.90th=[51643], 99.95th=[51643], 00:19:06.263 | 99.99th=[51643] 00:19:06.263 bw ( KiB/s): min=12288, max=12288, per=30.32%, avg=12288.00, stdev= 0.00, samples=1 00:19:06.263 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:19:06.263 lat (msec) : 2=0.03%, 4=0.46%, 10=4.18%, 20=46.77%, 50=37.95% 00:19:06.263 lat (msec) : 100=1.51%, 250=4.90%, 500=4.18% 00:19:06.263 cpu : usr=1.12%, sys=1.86%, ctx=146, majf=0, minf=9 00:19:06.263 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:19:06.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.263 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:06.263 issued rwts: total=1502,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.263 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:06.263 job1: (groupid=0, jobs=1): err= 0: pid=357899: Mon May 13 02:59:56 2024 00:19:06.263 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:19:06.263 slat (usec): min=2, max=24098, avg=132.03, stdev=964.15 00:19:06.263 clat (usec): min=5878, max=82911, avg=17308.66, stdev=11879.61 00:19:06.263 lat (usec): min=5884, max=82944, avg=17440.69, stdev=11958.04 00:19:06.263 clat percentiles (usec): 00:19:06.263 | 1.00th=[ 5932], 5.00th=[11076], 10.00th=[11207], 20.00th=[11338], 00:19:06.263 | 30.00th=[11600], 40.00th=[11994], 50.00th=[12911], 60.00th=[13566], 00:19:06.263 | 70.00th=[14353], 80.00th=[18220], 90.00th=[30540], 95.00th=[42206], 00:19:06.263 | 99.00th=[67634], 99.50th=[71828], 99.90th=[76022], 99.95th=[76022], 00:19:06.263 | 99.99th=[83362] 00:19:06.263 write: IOPS=4066, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:19:06.263 slat (usec): min=3, max=13421, avg=122.86, stdev=734.49 00:19:06.263 clat (usec): min=4518, max=55339, avg=15910.00, stdev=6509.30 
00:19:06.263 lat (usec): min=5274, max=55345, avg=16032.86, stdev=6564.27 00:19:06.263 clat percentiles (usec): 00:19:06.263 | 1.00th=[ 6652], 5.00th=[10814], 10.00th=[11338], 20.00th=[11994], 00:19:06.263 | 30.00th=[12256], 40.00th=[13173], 50.00th=[13829], 60.00th=[14615], 00:19:06.263 | 70.00th=[17171], 80.00th=[19530], 90.00th=[21890], 95.00th=[27395], 00:19:06.263 | 99.00th=[47449], 99.50th=[55313], 99.90th=[55313], 99.95th=[55313], 00:19:06.263 | 99.99th=[55313] 00:19:06.263 bw ( KiB/s): min=12288, max=19392, per=39.08%, avg=15840.00, stdev=5023.29, samples=2 00:19:06.263 iops : min= 3072, max= 4848, avg=3960.00, stdev=1255.82, samples=2 00:19:06.263 lat (msec) : 10=2.66%, 20=78.32%, 50=16.61%, 100=2.41% 00:19:06.263 cpu : usr=2.49%, sys=4.88%, ctx=448, majf=0, minf=17 00:19:06.263 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:06.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.263 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:06.263 issued rwts: total=3584,4087,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.263 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:06.263 job2: (groupid=0, jobs=1): err= 0: pid=357900: Mon May 13 02:59:56 2024 00:19:06.263 read: IOPS=2009, BW=8039KiB/s (8232kB/s)(8192KiB/1019msec) 00:19:06.263 slat (usec): min=2, max=19948, avg=213.45, stdev=1244.15 00:19:06.263 clat (usec): min=7497, max=50071, avg=26550.29, stdev=10059.88 00:19:06.263 lat (usec): min=7504, max=50188, avg=26763.74, stdev=10139.02 00:19:06.263 clat percentiles (usec): 00:19:06.263 | 1.00th=[10159], 5.00th=[12387], 10.00th=[12911], 20.00th=[16057], 00:19:06.263 | 30.00th=[19792], 40.00th=[22676], 50.00th=[27919], 60.00th=[29492], 00:19:06.263 | 70.00th=[33162], 80.00th=[35914], 90.00th=[38536], 95.00th=[44303], 00:19:06.263 | 99.00th=[47449], 99.50th=[48497], 99.90th=[49546], 99.95th=[50070], 00:19:06.263 | 99.99th=[50070] 00:19:06.263 write: IOPS=2287, BW=9150KiB/s (9370kB/s)(9324KiB/1019msec); 0 zone resets 00:19:06.263 slat (usec): min=3, max=13109, avg=234.39, stdev=1058.01 00:19:06.263 clat (usec): min=9924, max=72771, avg=31890.56, stdev=13613.57 00:19:06.263 lat (usec): min=9934, max=72778, avg=32124.94, stdev=13725.69 00:19:06.263 clat percentiles (usec): 00:19:06.263 | 1.00th=[10290], 5.00th=[13304], 10.00th=[15139], 20.00th=[17957], 00:19:06.263 | 30.00th=[22938], 40.00th=[26870], 50.00th=[30540], 60.00th=[34341], 00:19:06.263 | 70.00th=[36439], 80.00th=[44303], 90.00th=[54264], 95.00th=[55313], 00:19:06.263 | 99.00th=[62653], 99.50th=[65799], 99.90th=[70779], 99.95th=[70779], 00:19:06.263 | 99.99th=[72877] 00:19:06.263 bw ( KiB/s): min= 8744, max= 8880, per=21.74%, avg=8812.00, stdev=96.17, samples=2 00:19:06.263 iops : min= 2186, max= 2220, avg=2203.00, stdev=24.04, samples=2 00:19:06.263 lat (msec) : 10=0.75%, 20=26.74%, 50=64.35%, 100=8.15% 00:19:06.263 cpu : usr=2.36%, sys=3.44%, ctx=405, majf=0, minf=19 00:19:06.263 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:19:06.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.263 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:06.263 issued rwts: total=2048,2331,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.264 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:06.264 job3: (groupid=0, jobs=1): err= 0: pid=357901: Mon May 13 02:59:56 2024 00:19:06.264 read: IOPS=2471, BW=9884KiB/s (10.1MB/s)(10.0MiB/1036msec) 
00:19:06.264 slat (usec): min=2, max=19895, avg=152.01, stdev=1069.38 00:19:06.264 clat (usec): min=5131, max=71747, avg=18936.23, stdev=9805.31 00:19:06.264 lat (usec): min=5134, max=75387, avg=19088.24, stdev=9872.61 00:19:06.264 clat percentiles (usec): 00:19:06.264 | 1.00th=[ 5145], 5.00th=[ 8029], 10.00th=[10683], 20.00th=[11600], 00:19:06.264 | 30.00th=[11863], 40.00th=[15008], 50.00th=[16909], 60.00th=[20317], 00:19:06.264 | 70.00th=[21627], 80.00th=[23987], 90.00th=[28967], 95.00th=[40109], 00:19:06.264 | 99.00th=[50594], 99.50th=[62653], 99.90th=[71828], 99.95th=[71828], 00:19:06.264 | 99.99th=[71828] 00:19:06.264 write: IOPS=2845, BW=11.1MiB/s (11.7MB/s)(11.5MiB/1036msec); 0 zone resets 00:19:06.264 slat (usec): min=3, max=11867, avg=191.83, stdev=872.25 00:19:06.264 clat (usec): min=1350, max=90863, avg=28122.29, stdev=16467.77 00:19:06.264 lat (usec): min=2034, max=90868, avg=28314.12, stdev=16578.55 00:19:06.264 clat percentiles (usec): 00:19:06.264 | 1.00th=[ 4146], 5.00th=[ 6390], 10.00th=[ 9241], 20.00th=[12911], 00:19:06.264 | 30.00th=[16581], 40.00th=[21890], 50.00th=[25560], 60.00th=[28443], 00:19:06.264 | 70.00th=[34341], 80.00th=[43779], 90.00th=[51643], 95.00th=[55837], 00:19:06.264 | 99.00th=[80217], 99.50th=[83362], 99.90th=[90702], 99.95th=[90702], 00:19:06.264 | 99.99th=[90702] 00:19:06.264 bw ( KiB/s): min= 9296, max=13272, per=27.84%, avg=11284.00, stdev=2811.46, samples=2 00:19:06.264 iops : min= 2324, max= 3318, avg=2821.00, stdev=702.86, samples=2 00:19:06.264 lat (msec) : 2=0.02%, 4=0.49%, 10=9.33%, 20=36.06%, 50=47.48% 00:19:06.264 lat (msec) : 100=6.63% 00:19:06.264 cpu : usr=2.32%, sys=3.67%, ctx=460, majf=0, minf=5 00:19:06.264 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:19:06.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.264 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:06.264 issued rwts: total=2560,2948,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.264 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:06.264 00:19:06.264 Run status group 0 (all jobs): 00:19:06.264 READ: bw=35.2MiB/s (36.9MB/s), 5584KiB/s-13.9MiB/s (5718kB/s-14.6MB/s), io=37.9MiB (39.7MB), run=1005-1076msec 00:19:06.264 WRITE: bw=39.6MiB/s (41.5MB/s), 5710KiB/s-15.9MiB/s (5847kB/s-16.7MB/s), io=42.6MiB (44.7MB), run=1005-1076msec 00:19:06.264 00:19:06.264 Disk stats (read/write): 00:19:06.264 nvme0n1: ios=1414/1536, merge=0/0, ticks=30262/34302, in_queue=64564, util=98.30% 00:19:06.264 nvme0n2: ios=3112/3096, merge=0/0, ticks=25164/24237, in_queue=49401, util=98.07% 00:19:06.264 nvme0n3: ios=1738/2048, merge=0/0, ticks=16648/18667, in_queue=35315, util=99.48% 00:19:06.264 nvme0n4: ios=2105/2542, merge=0/0, ticks=32550/56034, in_queue=88584, util=97.90% 00:19:06.264 02:59:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:06.264 [global] 00:19:06.264 thread=1 00:19:06.264 invalidate=1 00:19:06.264 rw=randwrite 00:19:06.264 time_based=1 00:19:06.264 runtime=1 00:19:06.264 ioengine=libaio 00:19:06.264 direct=1 00:19:06.264 bs=4096 00:19:06.264 iodepth=128 00:19:06.264 norandommap=0 00:19:06.264 numjobs=1 00:19:06.264 00:19:06.264 verify_dump=1 00:19:06.264 verify_backlog=512 00:19:06.264 verify_state_save=0 00:19:06.264 do_verify=1 00:19:06.264 verify=crc32c-intel 00:19:06.264 [job0] 00:19:06.264 filename=/dev/nvme0n1 00:19:06.264 [job1] 00:19:06.264 
filename=/dev/nvme0n2 00:19:06.264 [job2] 00:19:06.264 filename=/dev/nvme0n3 00:19:06.264 [job3] 00:19:06.264 filename=/dev/nvme0n4 00:19:06.264 Could not set queue depth (nvme0n1) 00:19:06.264 Could not set queue depth (nvme0n2) 00:19:06.264 Could not set queue depth (nvme0n3) 00:19:06.264 Could not set queue depth (nvme0n4) 00:19:06.523 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:06.523 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:06.523 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:06.523 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:06.523 fio-3.35 00:19:06.523 Starting 4 threads 00:19:07.911 00:19:07.911 job0: (groupid=0, jobs=1): err= 0: pid=358127: Mon May 13 02:59:58 2024 00:19:07.911 read: IOPS=1008, BW=4035KiB/s (4132kB/s)(4096KiB/1015msec) 00:19:07.911 slat (usec): min=3, max=220921, avg=737.83, stdev=8503.59 00:19:07.911 clat (msec): min=13, max=264, avg=85.66, stdev=75.58 00:19:07.911 lat (msec): min=15, max=265, avg=86.40, stdev=75.67 00:19:07.911 clat percentiles (msec): 00:19:07.911 | 1.00th=[ 16], 5.00th=[ 16], 10.00th=[ 17], 20.00th=[ 18], 00:19:07.911 | 30.00th=[ 21], 40.00th=[ 37], 50.00th=[ 93], 60.00th=[ 95], 00:19:07.911 | 70.00th=[ 116], 80.00th=[ 123], 90.00th=[ 249], 95.00th=[ 259], 00:19:07.911 | 99.00th=[ 266], 99.50th=[ 266], 99.90th=[ 266], 99.95th=[ 266], 00:19:07.911 | 99.99th=[ 266] 00:19:07.911 write: IOPS=1300, BW=5202KiB/s (5327kB/s)(5280KiB/1015msec); 0 zone resets 00:19:07.911 slat (usec): min=4, max=73581, avg=188.40, stdev=2148.37 00:19:07.911 clat (usec): min=330, max=118816, avg=29782.37, stdev=28460.28 00:19:07.911 lat (msec): min=12, max=119, avg=29.97, stdev=28.45 00:19:07.911 clat percentiles (msec): 00:19:07.911 | 1.00th=[ 13], 5.00th=[ 15], 10.00th=[ 16], 20.00th=[ 16], 00:19:07.911 | 30.00th=[ 17], 40.00th=[ 17], 50.00th=[ 18], 60.00th=[ 18], 00:19:07.911 | 70.00th=[ 20], 80.00th=[ 28], 90.00th=[ 83], 95.00th=[ 95], 00:19:07.911 | 99.00th=[ 120], 99.50th=[ 120], 99.90th=[ 120], 99.95th=[ 120], 00:19:07.911 | 99.99th=[ 120] 00:19:07.911 bw ( KiB/s): min= 4096, max= 5440, per=9.78%, avg=4768.00, stdev=950.35, samples=2 00:19:07.911 iops : min= 1024, max= 1360, avg=1192.00, stdev=237.59, samples=2 00:19:07.911 lat (usec) : 500=0.04% 00:19:07.911 lat (msec) : 20=53.71%, 50=12.37%, 100=16.51%, 250=13.35%, 500=4.01% 00:19:07.911 cpu : usr=1.38%, sys=1.68%, ctx=107, majf=0, minf=21 00:19:07.911 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:19:07.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.911 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:07.911 issued rwts: total=1024,1320,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.911 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:07.911 job1: (groupid=0, jobs=1): err= 0: pid=358128: Mon May 13 02:59:58 2024 00:19:07.911 read: IOPS=3600, BW=14.1MiB/s (14.7MB/s)(14.1MiB/1004msec) 00:19:07.911 slat (usec): min=3, max=12217, avg=97.59, stdev=541.94 00:19:07.911 clat (usec): min=2268, max=34655, avg=11438.52, stdev=4569.88 00:19:07.911 lat (usec): min=3388, max=34662, avg=11536.10, stdev=4617.45 00:19:07.911 clat percentiles (usec): 00:19:07.911 | 1.00th=[ 5473], 5.00th=[ 7373], 10.00th=[ 7701], 20.00th=[ 8291], 
00:19:07.911 | 30.00th=[ 8717], 40.00th=[ 9372], 50.00th=[10552], 60.00th=[11469], 00:19:07.911 | 70.00th=[12256], 80.00th=[12387], 90.00th=[16712], 95.00th=[20055], 00:19:07.911 | 99.00th=[31065], 99.50th=[31851], 99.90th=[34866], 99.95th=[34866], 00:19:07.911 | 99.99th=[34866] 00:19:07.911 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:19:07.911 slat (usec): min=3, max=6341, avg=148.09, stdev=472.65 00:19:07.911 clat (usec): min=3612, max=38037, avg=20896.36, stdev=6185.17 00:19:07.911 lat (usec): min=3619, max=39189, avg=21044.45, stdev=6229.26 00:19:07.911 clat percentiles (usec): 00:19:07.911 | 1.00th=[ 7701], 5.00th=[ 9634], 10.00th=[12125], 20.00th=[14484], 00:19:07.911 | 30.00th=[15926], 40.00th=[20055], 50.00th=[23200], 60.00th=[25035], 00:19:07.911 | 70.00th=[25822], 80.00th=[26346], 90.00th=[26608], 95.00th=[27395], 00:19:07.911 | 99.00th=[32375], 99.50th=[34866], 99.90th=[38011], 99.95th=[38011], 00:19:07.911 | 99.99th=[38011] 00:19:07.911 bw ( KiB/s): min=15608, max=16384, per=32.81%, avg=15996.00, stdev=548.71, samples=2 00:19:07.911 iops : min= 3902, max= 4096, avg=3999.00, stdev=137.18, samples=2 00:19:07.911 lat (msec) : 4=0.56%, 10=23.78%, 20=41.15%, 50=34.51% 00:19:07.911 cpu : usr=6.08%, sys=6.98%, ctx=640, majf=0, minf=9 00:19:07.911 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:07.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.911 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:07.911 issued rwts: total=3615,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.911 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:07.911 job2: (groupid=0, jobs=1): err= 0: pid=358129: Mon May 13 02:59:58 2024 00:19:07.911 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:19:07.911 slat (usec): min=2, max=11137, avg=109.36, stdev=618.46 00:19:07.911 clat (usec): min=6683, max=29054, avg=13216.39, stdev=4087.58 00:19:07.911 lat (usec): min=6692, max=29072, avg=13325.75, stdev=4128.24 00:19:07.911 clat percentiles (usec): 00:19:07.911 | 1.00th=[ 6783], 5.00th=[ 8455], 10.00th=[ 9372], 20.00th=[10290], 00:19:07.911 | 30.00th=[11338], 40.00th=[11863], 50.00th=[11994], 60.00th=[12649], 00:19:07.911 | 70.00th=[13304], 80.00th=[14877], 90.00th=[19268], 95.00th=[23200], 00:19:07.911 | 99.00th=[27132], 99.50th=[27657], 99.90th=[28967], 99.95th=[28967], 00:19:07.911 | 99.99th=[28967] 00:19:07.911 write: IOPS=4470, BW=17.5MiB/s (18.3MB/s)(17.5MiB/1002msec); 0 zone resets 00:19:07.911 slat (usec): min=3, max=51731, avg=113.76, stdev=893.81 00:19:07.911 clat (usec): min=1009, max=62490, avg=16279.70, stdev=9890.19 00:19:07.911 lat (usec): min=1014, max=64331, avg=16393.45, stdev=9939.25 00:19:07.911 clat percentiles (usec): 00:19:07.911 | 1.00th=[ 3818], 5.00th=[ 5997], 10.00th=[ 6915], 20.00th=[ 9896], 00:19:07.911 | 30.00th=[11469], 40.00th=[12387], 50.00th=[13960], 60.00th=[14615], 00:19:07.911 | 70.00th=[18220], 80.00th=[23462], 90.00th=[26346], 95.00th=[27657], 00:19:07.911 | 99.00th=[58983], 99.50th=[62653], 99.90th=[62653], 99.95th=[62653], 00:19:07.911 | 99.99th=[62653] 00:19:07.911 bw ( KiB/s): min=16656, max=18160, per=35.70%, avg=17408.00, stdev=1063.49, samples=2 00:19:07.911 iops : min= 4164, max= 4540, avg=4352.00, stdev=265.87, samples=2 00:19:07.911 lat (msec) : 2=0.09%, 4=1.15%, 10=17.61%, 20=63.01%, 50=16.65% 00:19:07.911 lat (msec) : 100=1.48% 00:19:07.911 cpu : usr=5.29%, sys=6.49%, ctx=493, majf=0, minf=7 00:19:07.911 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:19:07.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.912 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:07.912 issued rwts: total=4096,4479,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.912 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:07.912 job3: (groupid=0, jobs=1): err= 0: pid=358130: Mon May 13 02:59:58 2024 00:19:07.912 read: IOPS=2019, BW=8079KiB/s (8273kB/s)(8192KiB/1014msec) 00:19:07.912 slat (usec): min=2, max=99282, avg=228.79, stdev=3073.54 00:19:07.912 clat (msec): min=5, max=127, avg=33.56, stdev=38.81 00:19:07.912 lat (msec): min=5, max=127, avg=33.78, stdev=38.94 00:19:07.912 clat percentiles (msec): 00:19:07.912 | 1.00th=[ 6], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 14], 00:19:07.912 | 30.00th=[ 14], 40.00th=[ 14], 50.00th=[ 16], 60.00th=[ 17], 00:19:07.912 | 70.00th=[ 21], 80.00th=[ 33], 90.00th=[ 116], 95.00th=[ 123], 00:19:07.912 | 99.00th=[ 127], 99.50th=[ 127], 99.90th=[ 127], 99.95th=[ 128], 00:19:07.912 | 99.99th=[ 128] 00:19:07.912 write: IOPS=2443, BW=9775KiB/s (10.0MB/s)(9912KiB/1014msec); 0 zone resets 00:19:07.912 slat (usec): min=3, max=93692, avg=203.22, stdev=2427.81 00:19:07.912 clat (msec): min=2, max=112, avg=23.59, stdev=22.43 00:19:07.912 lat (msec): min=2, max=115, avg=23.79, stdev=22.58 00:19:07.912 clat percentiles (msec): 00:19:07.912 | 1.00th=[ 4], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 13], 00:19:07.912 | 30.00th=[ 14], 40.00th=[ 15], 50.00th=[ 17], 60.00th=[ 20], 00:19:07.912 | 70.00th=[ 23], 80.00th=[ 26], 90.00th=[ 36], 95.00th=[ 105], 00:19:07.912 | 99.00th=[ 112], 99.50th=[ 113], 99.90th=[ 113], 99.95th=[ 113], 00:19:07.912 | 99.99th=[ 113] 00:19:07.912 bw ( KiB/s): min= 5272, max=13528, per=19.28%, avg=9400.00, stdev=5837.87, samples=2 00:19:07.912 iops : min= 1318, max= 3382, avg=2350.00, stdev=1459.47, samples=2 00:19:07.912 lat (msec) : 4=0.64%, 10=4.00%, 20=61.22%, 50=22.89%, 100=0.73% 00:19:07.912 lat (msec) : 250=10.52% 00:19:07.912 cpu : usr=2.07%, sys=3.36%, ctx=353, majf=0, minf=13 00:19:07.912 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:19:07.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.912 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:07.912 issued rwts: total=2048,2478,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.912 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:07.912 00:19:07.912 Run status group 0 (all jobs): 00:19:07.912 READ: bw=41.5MiB/s (43.5MB/s), 4035KiB/s-16.0MiB/s (4132kB/s-16.7MB/s), io=42.1MiB (44.2MB), run=1002-1015msec 00:19:07.912 WRITE: bw=47.6MiB/s (49.9MB/s), 5202KiB/s-17.5MiB/s (5327kB/s-18.3MB/s), io=48.3MiB (50.7MB), run=1002-1015msec 00:19:07.912 00:19:07.912 Disk stats (read/write): 00:19:07.912 nvme0n1: ios=910/1024, merge=0/0, ticks=21588/4417, in_queue=26005, util=88.38% 00:19:07.912 nvme0n2: ios=3112/3389, merge=0/0, ticks=16742/31275, in_queue=48017, util=96.95% 00:19:07.912 nvme0n3: ios=3641/3612, merge=0/0, ticks=41473/50797, in_queue=92270, util=91.76% 00:19:07.912 nvme0n4: ios=1559/1809, merge=0/0, ticks=20858/20622, in_queue=41480, util=98.00% 00:19:07.912 02:59:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:19:07.912 02:59:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=358271 00:19:07.912 02:59:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:07.912 02:59:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:19:07.912 [global] 00:19:07.912 thread=1 00:19:07.912 invalidate=1 00:19:07.912 rw=read 00:19:07.912 time_based=1 00:19:07.912 runtime=10 00:19:07.912 ioengine=libaio 00:19:07.912 direct=1 00:19:07.912 bs=4096 00:19:07.912 iodepth=1 00:19:07.912 norandommap=1 00:19:07.912 numjobs=1 00:19:07.912 00:19:07.912 [job0] 00:19:07.912 filename=/dev/nvme0n1 00:19:07.912 [job1] 00:19:07.912 filename=/dev/nvme0n2 00:19:07.912 [job2] 00:19:07.912 filename=/dev/nvme0n3 00:19:07.912 [job3] 00:19:07.912 filename=/dev/nvme0n4 00:19:07.912 Could not set queue depth (nvme0n1) 00:19:07.912 Could not set queue depth (nvme0n2) 00:19:07.912 Could not set queue depth (nvme0n3) 00:19:07.912 Could not set queue depth (nvme0n4) 00:19:07.912 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:07.912 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:07.912 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:07.912 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:07.912 fio-3.35 00:19:07.912 Starting 4 threads 00:19:11.190 03:00:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:11.190 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=8519680, buflen=4096 00:19:11.190 fio: pid=358480, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:11.190 03:00:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:11.190 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=319488, buflen=4096 00:19:11.190 fio: pid=358479, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:11.190 03:00:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:11.190 03:00:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:11.448 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=2908160, buflen=4096 00:19:11.448 fio: pid=358477, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:11.448 03:00:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:11.448 03:00:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:11.706 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=5496832, buflen=4096 00:19:11.706 fio: pid=358478, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:11.706 03:00:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:11.706 03:00:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:11.706 00:19:11.706 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, 
error=Remote I/O error): pid=358477: Mon May 13 03:00:02 2024 00:19:11.706 read: IOPS=207, BW=830KiB/s (850kB/s)(2840KiB/3420msec) 00:19:11.706 slat (usec): min=5, max=15189, avg=46.13, stdev=655.31 00:19:11.706 clat (usec): min=434, max=45139, avg=4766.41, stdev=12311.45 00:19:11.706 lat (usec): min=441, max=57337, avg=4812.56, stdev=12393.07 00:19:11.706 clat percentiles (usec): 00:19:11.706 | 1.00th=[ 506], 5.00th=[ 523], 10.00th=[ 529], 20.00th=[ 537], 00:19:11.706 | 30.00th=[ 553], 40.00th=[ 562], 50.00th=[ 578], 60.00th=[ 652], 00:19:11.706 | 70.00th=[ 685], 80.00th=[ 701], 90.00th=[40633], 95.00th=[41157], 00:19:11.706 | 99.00th=[41157], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:19:11.706 | 99.99th=[45351] 00:19:11.706 bw ( KiB/s): min= 96, max= 3824, per=20.12%, avg=921.33, stdev=1502.64, samples=6 00:19:11.706 iops : min= 24, max= 956, avg=230.33, stdev=375.66, samples=6 00:19:11.706 lat (usec) : 500=0.42%, 750=88.47%, 1000=0.56% 00:19:11.706 lat (msec) : 2=0.14%, 50=10.27% 00:19:11.706 cpu : usr=0.18%, sys=0.35%, ctx=717, majf=0, minf=1 00:19:11.706 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:11.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.706 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.706 issued rwts: total=711,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:11.706 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:11.706 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=358478: Mon May 13 03:00:02 2024 00:19:11.706 read: IOPS=365, BW=1459KiB/s (1495kB/s)(5368KiB/3678msec) 00:19:11.706 slat (usec): min=5, max=14146, avg=55.29, stdev=722.80 00:19:11.706 clat (usec): min=490, max=45023, avg=2682.43, stdev=9070.02 00:19:11.706 lat (usec): min=496, max=45040, avg=2729.31, stdev=9088.37 00:19:11.706 clat percentiles (usec): 00:19:11.706 | 1.00th=[ 498], 5.00th=[ 506], 10.00th=[ 515], 20.00th=[ 529], 00:19:11.706 | 30.00th=[ 545], 40.00th=[ 553], 50.00th=[ 562], 60.00th=[ 562], 00:19:11.706 | 70.00th=[ 570], 80.00th=[ 578], 90.00th=[ 594], 95.00th=[40633], 00:19:11.706 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[44827], 00:19:11.706 | 99.99th=[44827] 00:19:11.706 bw ( KiB/s): min= 96, max= 6560, per=26.93%, avg=1233.14, stdev=2404.77, samples=7 00:19:11.706 iops : min= 24, max= 1640, avg=308.29, stdev=601.19, samples=7 00:19:11.706 lat (usec) : 500=2.16%, 750=92.03%, 1000=0.45% 00:19:11.706 lat (msec) : 2=0.07%, 50=5.21% 00:19:11.706 cpu : usr=0.22%, sys=0.44%, ctx=1350, majf=0, minf=1 00:19:11.706 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:11.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.706 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.706 issued rwts: total=1343,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:11.706 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:11.706 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=358479: Mon May 13 03:00:02 2024 00:19:11.706 read: IOPS=25, BW=98.9KiB/s (101kB/s)(312KiB/3156msec) 00:19:11.706 slat (nsec): min=12818, max=47663, avg=21121.97, stdev=8758.32 00:19:11.706 clat (usec): min=490, max=42941, avg=40423.18, stdev=6521.28 00:19:11.706 lat (usec): min=524, max=42960, avg=40444.36, stdev=6520.22 00:19:11.706 clat percentiles (usec): 00:19:11.706 | 1.00th=[ 490], 
5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:19:11.706 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:19:11.706 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:19:11.706 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:19:11.706 | 99.99th=[42730] 00:19:11.706 bw ( KiB/s): min= 96, max= 112, per=2.14%, avg=98.67, stdev= 6.53, samples=6 00:19:11.706 iops : min= 24, max= 28, avg=24.67, stdev= 1.63, samples=6 00:19:11.706 lat (usec) : 500=1.27%, 1000=1.27% 00:19:11.706 lat (msec) : 50=96.20% 00:19:11.706 cpu : usr=0.00%, sys=0.06%, ctx=82, majf=0, minf=1 00:19:11.706 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:11.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.706 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.706 issued rwts: total=79,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:11.706 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:11.706 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=358480: Mon May 13 03:00:02 2024 00:19:11.706 read: IOPS=719, BW=2876KiB/s (2945kB/s)(8320KiB/2893msec) 00:19:11.706 slat (nsec): min=5397, max=59761, avg=11295.32, stdev=6372.92 00:19:11.706 clat (usec): min=381, max=45026, avg=1376.33, stdev=6015.96 00:19:11.706 lat (usec): min=387, max=45043, avg=1387.61, stdev=6017.64 00:19:11.706 clat percentiles (usec): 00:19:11.706 | 1.00th=[ 392], 5.00th=[ 400], 10.00th=[ 408], 20.00th=[ 416], 00:19:11.706 | 30.00th=[ 424], 40.00th=[ 433], 50.00th=[ 445], 60.00th=[ 453], 00:19:11.706 | 70.00th=[ 469], 80.00th=[ 498], 90.00th=[ 668], 95.00th=[ 709], 00:19:11.706 | 99.00th=[41157], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:19:11.706 | 99.99th=[44827] 00:19:11.706 bw ( KiB/s): min= 104, max= 8080, per=71.14%, avg=3257.60, stdev=4289.95, samples=5 00:19:11.706 iops : min= 26, max= 2020, avg=814.40, stdev=1072.49, samples=5 00:19:11.706 lat (usec) : 500=80.15%, 750=16.82%, 1000=0.72% 00:19:11.706 lat (msec) : 4=0.05%, 50=2.21% 00:19:11.706 cpu : usr=0.55%, sys=1.24%, ctx=2081, majf=0, minf=1 00:19:11.706 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:11.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.706 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.706 issued rwts: total=2081,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:11.706 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:11.706 00:19:11.706 Run status group 0 (all jobs): 00:19:11.706 READ: bw=4579KiB/s (4688kB/s), 98.9KiB/s-2876KiB/s (101kB/s-2945kB/s), io=16.4MiB (17.2MB), run=2893-3678msec 00:19:11.706 00:19:11.706 Disk stats (read/write): 00:19:11.706 nvme0n1: ios=752/0, merge=0/0, ticks=4437/0, in_queue=4437, util=98.63% 00:19:11.706 nvme0n2: ios=1289/0, merge=0/0, ticks=4729/0, in_queue=4729, util=98.04% 00:19:11.706 nvme0n3: ios=121/0, merge=0/0, ticks=4188/0, in_queue=4188, util=99.13% 00:19:11.706 nvme0n4: ios=2079/0, merge=0/0, ticks=2811/0, in_queue=2811, util=96.75% 00:19:12.011 03:00:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:12.011 03:00:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:12.289 03:00:02 nvmf_tcp.nvmf_fio_target -- 
target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:12.289 03:00:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:12.547 03:00:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:12.547 03:00:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:12.805 03:00:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:12.805 03:00:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:13.066 03:00:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:19:13.066 03:00:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 358271 00:19:13.066 03:00:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:19:13.067 03:00:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:13.067 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:13.067 03:00:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:13.067 03:00:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:19:13.067 03:00:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:19:13.067 03:00:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:13.067 03:00:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:19:13.067 03:00:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:13.067 03:00:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:19:13.067 03:00:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:13.067 03:00:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:13.067 nvmf hotplug test: fio failed as expected 00:19:13.067 03:00:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:13.326 03:00:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:13.326 03:00:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:13.326 03:00:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:13.326 03:00:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:13.326 03:00:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:19:13.326 03:00:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:13.326 03:00:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:19:13.326 03:00:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:13.326 03:00:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:19:13.326 03:00:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:13.326 03:00:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:19:13.326 rmmod nvme_tcp 00:19:13.326 rmmod nvme_fabrics 00:19:13.585 rmmod nvme_keyring 00:19:13.585 03:00:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:13.585 03:00:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:19:13.585 03:00:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:19:13.585 03:00:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 356364 ']' 00:19:13.585 03:00:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 356364 00:19:13.585 03:00:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 356364 ']' 00:19:13.585 03:00:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 356364 00:19:13.585 03:00:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:19:13.585 03:00:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:13.585 03:00:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 356364 00:19:13.585 03:00:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:13.585 03:00:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:13.585 03:00:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 356364' 00:19:13.585 killing process with pid 356364 00:19:13.585 03:00:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 356364 00:19:13.585 [2024-05-13 03:00:04.181036] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:13.585 03:00:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 356364 00:19:13.843 03:00:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:13.843 03:00:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:13.843 03:00:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:13.843 03:00:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:13.843 03:00:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:13.843 03:00:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:13.843 03:00:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:13.843 03:00:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:15.749 03:00:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:15.749 00:19:15.749 real 0m23.392s 00:19:15.749 user 1m19.836s 00:19:15.749 sys 0m6.292s 00:19:15.749 03:00:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:15.749 03:00:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.749 ************************************ 00:19:15.749 END TEST nvmf_fio_target 00:19:15.749 ************************************ 00:19:15.749 03:00:06 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:15.749 03:00:06 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:15.749 03:00:06 nvmf_tcp -- common/autotest_common.sh@1103 -- # 
xtrace_disable 00:19:15.749 03:00:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:15.749 ************************************ 00:19:15.749 START TEST nvmf_bdevio 00:19:15.749 ************************************ 00:19:15.749 03:00:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:16.007 * Looking for test storage... 00:19:16.007 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:19:16.007 03:00:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:17.907 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:17.907 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:19:17.907 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:17.907 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:17.907 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:17.907 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:17.907 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:17.907 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:19:17.907 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:17.907 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:19:17.907 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:19:17.907 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:19:17.907 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:19:17.907 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:19:17.907 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:19:17.907 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:17.907 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:17.907 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:17.907 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:17.907 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:17.907 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:17.907 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:17.907 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:17.907 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:17.907 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:17.907 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:17.907 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:17.907 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:17.907 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:17.907 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:17.907 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:17.907 03:00:08 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:17.907 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:17.907 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:17.907 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:17.907 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:17.907 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:17.907 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:17.907 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:17.907 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:17.907 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:17.907 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:17.907 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:17.907 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:17.907 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:17.907 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:17.907 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:17.907 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:17.907 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:17.908 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:17.908 
Found net devices under 0000:0a:00.1: cvl_0_1 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:17.908 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:17.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:19:17.908 00:19:17.908 --- 10.0.0.2 ping statistics --- 00:19:17.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.908 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:17.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:17.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:19:17.908 00:19:17.908 --- 10.0.0.1 ping statistics --- 00:19:17.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.908 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=361533 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 361533 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 361533 ']' 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:17.908 03:00:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:17.908 [2024-05-13 03:00:08.646544] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:19:17.908 [2024-05-13 03:00:08.646648] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:17.908 EAL: No free 2048 kB hugepages reported on node 1 00:19:17.908 [2024-05-13 03:00:08.687469] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:18.166 [2024-05-13 03:00:08.720602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:18.166 [2024-05-13 03:00:08.814164] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
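The commands traced just above are nvmf_tcp_init (test/nvmf/common.sh) splitting the two detected ice ports between the host and a private network namespace, so one machine can act as both TCP initiator (10.0.0.1 on cvl_0_1) and target (10.0.0.2 on cvl_0_0 inside cvl_0_0_ns_spdk). The following is a minimal standalone sketch reconstructed only from the commands visible in this log; the real helper adds option handling and cleanup that are not shown here.

TARGET_IF=cvl_0_0            # port moved into the target namespace (name taken from the log)
INITIATOR_IF=cvl_0_1         # port left in the default namespace
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                         # host -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1     # target namespace -> host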
00:19:18.166 [2024-05-13 03:00:08.814227] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:18.166 [2024-05-13 03:00:08.814250] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:18.166 [2024-05-13 03:00:08.814263] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:18.166 [2024-05-13 03:00:08.814275] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:18.166 [2024-05-13 03:00:08.814376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:18.166 [2024-05-13 03:00:08.814434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:18.166 [2024-05-13 03:00:08.814500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:18.166 [2024-05-13 03:00:08.814503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:18.166 03:00:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:18.166 03:00:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:19:18.166 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:18.166 03:00:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:18.166 03:00:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:18.166 03:00:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:18.166 03:00:08 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:18.166 03:00:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.166 03:00:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:18.166 [2024-05-13 03:00:08.961255] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:18.424 03:00:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.424 03:00:08 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:18.424 03:00:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.424 03:00:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:18.424 Malloc0 00:19:18.424 03:00:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.424 03:00:08 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:18.424 03:00:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.424 03:00:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:18.424 03:00:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.424 03:00:08 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:18.424 03:00:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.424 03:00:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:18.424 03:00:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.424 03:00:09 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:18.424 03:00:09 
nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.424 03:00:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:18.424 [2024-05-13 03:00:09.012058] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:18.424 [2024-05-13 03:00:09.012343] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:18.424 03:00:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.424 03:00:09 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:18.424 03:00:09 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:18.424 03:00:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:19:18.424 03:00:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:19:18.424 03:00:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:18.424 03:00:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:18.424 { 00:19:18.424 "params": { 00:19:18.424 "name": "Nvme$subsystem", 00:19:18.424 "trtype": "$TEST_TRANSPORT", 00:19:18.424 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:18.424 "adrfam": "ipv4", 00:19:18.424 "trsvcid": "$NVMF_PORT", 00:19:18.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:18.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:18.424 "hdgst": ${hdgst:-false}, 00:19:18.424 "ddgst": ${ddgst:-false} 00:19:18.424 }, 00:19:18.424 "method": "bdev_nvme_attach_controller" 00:19:18.424 } 00:19:18.424 EOF 00:19:18.424 )") 00:19:18.424 03:00:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:19:18.424 03:00:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:19:18.424 03:00:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:19:18.424 03:00:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:18.424 "params": { 00:19:18.424 "name": "Nvme1", 00:19:18.424 "trtype": "tcp", 00:19:18.424 "traddr": "10.0.0.2", 00:19:18.424 "adrfam": "ipv4", 00:19:18.424 "trsvcid": "4420", 00:19:18.424 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:18.424 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:18.424 "hdgst": false, 00:19:18.424 "ddgst": false 00:19:18.424 }, 00:19:18.424 "method": "bdev_nvme_attach_controller" 00:19:18.424 }' 00:19:18.424 [2024-05-13 03:00:09.055178] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:19:18.424 [2024-05-13 03:00:09.055248] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid361730 ] 00:19:18.424 EAL: No free 2048 kB hugepages reported on node 1 00:19:18.424 [2024-05-13 03:00:09.088837] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
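Before bdevio starts, the target side is assembled over the RPC socket and the initiator side is described by the JSON that gen_nvmf_target_json prints above. Below is a condensed sketch of that sequence under a few stated assumptions: the rpc.py path is shortened to its repository-relative form, the temporary JSON file name is hypothetical (the actual run feeds the config through process substitution as /dev/fd/62), and the outer "subsystems"/"bdev" wrapper is assembled by jq inside gen_nvmf_target_json, shown here as it would be expected to look.

rpc=scripts/rpc.py           # abbreviated; the log uses the absolute workspace path

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Hypothetical file name for illustration; parameters copied from the printf output above.
cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
test/bdev/bdevio/bdevio --json /tmp/nvme1.json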
00:19:18.424 [2024-05-13 03:00:09.117868] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:18.424 [2024-05-13 03:00:09.206990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:18.424 [2024-05-13 03:00:09.207038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:18.424 [2024-05-13 03:00:09.207041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.681 I/O targets: 00:19:18.681 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:18.681 00:19:18.681 00:19:18.681 CUnit - A unit testing framework for C - Version 2.1-3 00:19:18.681 http://cunit.sourceforge.net/ 00:19:18.681 00:19:18.681 00:19:18.681 Suite: bdevio tests on: Nvme1n1 00:19:18.681 Test: blockdev write read block ...passed 00:19:18.939 Test: blockdev write zeroes read block ...passed 00:19:18.939 Test: blockdev write zeroes read no split ...passed 00:19:18.939 Test: blockdev write zeroes read split ...passed 00:19:18.939 Test: blockdev write zeroes read split partial ...passed 00:19:18.939 Test: blockdev reset ...[2024-05-13 03:00:09.643623] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:18.939 [2024-05-13 03:00:09.643861] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb41200 (9): Bad file descriptor 00:19:18.939 [2024-05-13 03:00:09.664540] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:18.939 passed 00:19:18.939 Test: blockdev write read 8 blocks ...passed 00:19:18.939 Test: blockdev write read size > 128k ...passed 00:19:18.939 Test: blockdev write read invalid size ...passed 00:19:18.939 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:18.939 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:18.939 Test: blockdev write read max offset ...passed 00:19:19.197 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:19.197 Test: blockdev writev readv 8 blocks ...passed 00:19:19.197 Test: blockdev writev readv 30 x 1block ...passed 00:19:19.197 Test: blockdev writev readv block ...passed 00:19:19.197 Test: blockdev writev readv size > 128k ...passed 00:19:19.197 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:19.197 Test: blockdev comparev and writev ...[2024-05-13 03:00:09.845467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:19.197 [2024-05-13 03:00:09.845503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.197 [2024-05-13 03:00:09.845526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:19.197 [2024-05-13 03:00:09.845543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:19.197 [2024-05-13 03:00:09.846049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:19.197 [2024-05-13 03:00:09.846073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:19.197 [2024-05-13 03:00:09.846095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x200 00:19:19.197 [2024-05-13 03:00:09.846110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:19.197 [2024-05-13 03:00:09.846564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:19.197 [2024-05-13 03:00:09.846594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:19.197 [2024-05-13 03:00:09.846615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:19.197 [2024-05-13 03:00:09.846631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:19.197 [2024-05-13 03:00:09.847101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:19.197 [2024-05-13 03:00:09.847125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:19.197 [2024-05-13 03:00:09.847145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:19.197 [2024-05-13 03:00:09.847160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:19.197 passed 00:19:19.197 Test: blockdev nvme passthru rw ...passed 00:19:19.197 Test: blockdev nvme passthru vendor specific ...[2024-05-13 03:00:09.930215] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:19.197 [2024-05-13 03:00:09.930241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:19.197 [2024-05-13 03:00:09.930556] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:19.197 [2024-05-13 03:00:09.930580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:19.197 [2024-05-13 03:00:09.930892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:19.197 [2024-05-13 03:00:09.930915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:19.197 [2024-05-13 03:00:09.931232] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:19.197 [2024-05-13 03:00:09.931255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:19.197 passed 00:19:19.197 Test: blockdev nvme admin passthru ...passed 00:19:19.197 Test: blockdev copy ...passed 00:19:19.197 00:19:19.197 Run Summary: Type Total Ran Passed Failed Inactive 00:19:19.197 suites 1 1 n/a 0 0 00:19:19.197 tests 23 23 23 0 0 00:19:19.197 asserts 152 152 152 0 n/a 00:19:19.197 00:19:19.197 Elapsed time = 1.091 seconds 00:19:19.454 03:00:10 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:19.454 03:00:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:19.454 03:00:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:19.454 03:00:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.454 03:00:10 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:19.454 03:00:10 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:19:19.454 03:00:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:19.454 03:00:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:19:19.454 03:00:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:19.454 03:00:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:19:19.454 03:00:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:19.454 03:00:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:19.454 rmmod nvme_tcp 00:19:19.454 rmmod nvme_fabrics 00:19:19.454 rmmod nvme_keyring 00:19:19.454 03:00:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:19.454 03:00:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:19:19.454 03:00:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:19:19.454 03:00:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 361533 ']' 00:19:19.454 03:00:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 361533 00:19:19.454 03:00:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 361533 ']' 00:19:19.454 03:00:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 361533 00:19:19.454 03:00:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:19:19.454 03:00:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:19.454 03:00:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 361533 00:19:19.712 03:00:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:19:19.712 03:00:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:19:19.712 03:00:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 361533' 00:19:19.712 killing process with pid 361533 00:19:19.712 03:00:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 361533 00:19:19.712 [2024-05-13 03:00:10.282005] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:19.712 03:00:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 361533 00:19:19.970 03:00:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:19.970 03:00:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:19.970 03:00:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:19.970 03:00:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:19.970 03:00:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:19.970 03:00:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:19.970 03:00:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:19.970 03:00:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:21.872 03:00:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip 
-4 addr flush cvl_0_1 00:19:21.872 00:19:21.872 real 0m6.086s 00:19:21.872 user 0m9.596s 00:19:21.872 sys 0m2.012s 00:19:21.872 03:00:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:21.872 03:00:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:21.872 ************************************ 00:19:21.872 END TEST nvmf_bdevio 00:19:21.872 ************************************ 00:19:21.872 03:00:12 nvmf_tcp -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:19:21.872 03:00:12 nvmf_tcp -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:21.872 03:00:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:19:21.872 03:00:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:21.872 03:00:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:21.872 ************************************ 00:19:21.872 START TEST nvmf_bdevio_no_huge 00:19:21.872 ************************************ 00:19:21.872 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:22.129 * Looking for test storage... 00:19:22.129 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:22.129 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:22.129 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:22.129 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:22.129 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:22.129 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:22.129 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:22.129 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:22.130 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:22.130 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:22.130 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:22.130 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:22.130 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:22.130 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:22.130 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:22.130 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:22.130 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:22.130 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:22.130 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:22.130 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:22.130 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:22.130 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:22.130 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:22.130 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.130 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.130 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.130 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:22.130 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.130 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:19:22.130 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:22.130 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:22.130 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:22.130 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:22.130 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:22.130 03:00:12 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:22.130 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:22.130 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:22.130 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:22.130 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:22.130 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:22.130 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:22.130 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:22.130 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:22.130 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:22.130 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:22.130 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:22.130 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:22.130 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:22.130 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:22.130 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:22.130 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:19:22.130 03:00:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:24.033 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:24.033 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:24.033 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:24.033 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush 
cvl_0_0 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:24.033 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:24.034 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:24.034 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:24.034 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:24.034 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:24.034 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:24.034 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:24.034 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:24.034 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:19:24.034 00:19:24.034 --- 10.0.0.2 ping statistics --- 00:19:24.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.034 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:19:24.034 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:24.034 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:24.034 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.256 ms 00:19:24.034 00:19:24.034 --- 10.0.0.1 ping statistics --- 00:19:24.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.034 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:19:24.034 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:24.034 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:19:24.034 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:24.034 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:24.034 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:24.034 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:24.034 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:24.034 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:24.034 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:24.034 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:24.034 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:24.034 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:24.034 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:24.034 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=363807 00:19:24.034 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:24.034 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 363807 00:19:24.034 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 363807 ']' 00:19:24.034 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.034 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:24.034 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:24.034 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:24.034 03:00:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:24.034 [2024-05-13 03:00:14.811544] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:19:24.034 [2024-05-13 03:00:14.811636] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:24.292 [2024-05-13 03:00:14.861639] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:24.292 [2024-05-13 03:00:14.883936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:24.292 [2024-05-13 03:00:14.972126] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:24.292 [2024-05-13 03:00:14.972184] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:24.292 [2024-05-13 03:00:14.972199] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:24.292 [2024-05-13 03:00:14.972213] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:24.292 [2024-05-13 03:00:14.972225] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
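Compared with the previous run, the main difference so far is how the target comes up: nvmf_tcp_init splits the two cvl_* ports across a network namespace, and nvmf_tgt is then launched inside that namespace with hugepages disabled. A minimal recap using only commands visible in the trace (interface names, addresses and the 0x78 core mask are specific to this host):

    # Target-side port moves into its own namespace; the initiator side stays in the root ns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Target without hugepages: 1024 MB of ordinary memory (-s 1024), cores 3-6 (-m 0x78);
    # the suite backgrounds this and waits for the RPC socket via waitforlisten
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &

The reactors reported on cores 3-6 right after this are the four cores selected by that 0x78 mask.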
00:19:24.292 [2024-05-13 03:00:14.972314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:24.292 [2024-05-13 03:00:14.972373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:24.292 [2024-05-13 03:00:14.972438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:24.292 [2024-05-13 03:00:14.972440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:24.292 03:00:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:24.292 03:00:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:19:24.292 03:00:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:24.292 03:00:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:24.292 03:00:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:24.550 03:00:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:24.550 03:00:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:24.550 03:00:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.550 03:00:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:24.550 [2024-05-13 03:00:15.103029] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:24.550 03:00:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.550 03:00:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:24.550 03:00:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.550 03:00:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:24.550 Malloc0 00:19:24.550 03:00:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.550 03:00:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:24.550 03:00:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.550 03:00:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:24.550 03:00:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.550 03:00:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:24.550 03:00:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.550 03:00:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:24.550 03:00:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.550 03:00:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:24.550 03:00:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.550 03:00:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:24.550 [2024-05-13 03:00:15.141163] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be 
removed in v24.09 00:19:24.550 [2024-05-13 03:00:15.141467] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:24.550 03:00:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.550 03:00:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:24.550 03:00:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:24.550 03:00:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:19:24.550 03:00:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:19:24.550 03:00:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:24.550 03:00:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:24.550 { 00:19:24.550 "params": { 00:19:24.550 "name": "Nvme$subsystem", 00:19:24.550 "trtype": "$TEST_TRANSPORT", 00:19:24.550 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:24.550 "adrfam": "ipv4", 00:19:24.550 "trsvcid": "$NVMF_PORT", 00:19:24.550 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:24.550 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:24.550 "hdgst": ${hdgst:-false}, 00:19:24.550 "ddgst": ${ddgst:-false} 00:19:24.550 }, 00:19:24.550 "method": "bdev_nvme_attach_controller" 00:19:24.550 } 00:19:24.550 EOF 00:19:24.550 )") 00:19:24.550 03:00:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:19:24.550 03:00:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:19:24.550 03:00:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:19:24.550 03:00:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:24.550 "params": { 00:19:24.550 "name": "Nvme1", 00:19:24.550 "trtype": "tcp", 00:19:24.550 "traddr": "10.0.0.2", 00:19:24.550 "adrfam": "ipv4", 00:19:24.550 "trsvcid": "4420", 00:19:24.550 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:24.550 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:24.550 "hdgst": false, 00:19:24.550 "ddgst": false 00:19:24.550 }, 00:19:24.550 "method": "bdev_nvme_attach_controller" 00:19:24.550 }' 00:19:24.550 [2024-05-13 03:00:15.187961] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:19:24.550 [2024-05-13 03:00:15.188047] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid363830 ] 00:19:24.550 [2024-05-13 03:00:15.227364] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
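On the initiator side the test again uses SPDK's bdevio app rather than the kernel host, handing it the generated config over /dev/fd/62 together with the same no-huge memory settings. The invocation and the controller entry below are both taken from the trace; note that gen_nvmf_target_json wraps such entries into a full SPDK JSON config and the fd-62 plumbing is presumably bash process substitution (neither detail is spelled out in this excerpt):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio \
        --json /dev/fd/62 --no-huge -s 1024

    # Controller entry emitted by gen_nvmf_target_json for this run; hdgst/ddgst stay
    # false, so no NVMe/TCP header or data digests are negotiated
    {
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }

As in the first bdevio pass above, the COMPARE FAILURE / ABORTED - FAILED FUSED completions that follow are printed at NOTICE level while the comparev-and-writev case is still marked passed, so they appear to be exercised failure paths rather than errors in the run.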
00:19:24.550 [2024-05-13 03:00:15.247235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:24.550 [2024-05-13 03:00:15.332271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:24.550 [2024-05-13 03:00:15.332319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:24.550 [2024-05-13 03:00:15.332322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.808 I/O targets: 00:19:24.808 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:24.808 00:19:24.808 00:19:24.808 CUnit - A unit testing framework for C - Version 2.1-3 00:19:24.808 http://cunit.sourceforge.net/ 00:19:24.808 00:19:24.808 00:19:24.808 Suite: bdevio tests on: Nvme1n1 00:19:24.808 Test: blockdev write read block ...passed 00:19:24.808 Test: blockdev write zeroes read block ...passed 00:19:24.808 Test: blockdev write zeroes read no split ...passed 00:19:25.066 Test: blockdev write zeroes read split ...passed 00:19:25.066 Test: blockdev write zeroes read split partial ...passed 00:19:25.066 Test: blockdev reset ...[2024-05-13 03:00:15.678329] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:25.066 [2024-05-13 03:00:15.678434] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa0f640 (9): Bad file descriptor 00:19:25.066 [2024-05-13 03:00:15.775681] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:25.066 passed 00:19:25.066 Test: blockdev write read 8 blocks ...passed 00:19:25.066 Test: blockdev write read size > 128k ...passed 00:19:25.066 Test: blockdev write read invalid size ...passed 00:19:25.066 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:25.066 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:25.066 Test: blockdev write read max offset ...passed 00:19:25.324 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:25.324 Test: blockdev writev readv 8 blocks ...passed 00:19:25.324 Test: blockdev writev readv 30 x 1block ...passed 00:19:25.324 Test: blockdev writev readv block ...passed 00:19:25.324 Test: blockdev writev readv size > 128k ...passed 00:19:25.324 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:25.324 Test: blockdev comparev and writev ...[2024-05-13 03:00:16.034776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.324 [2024-05-13 03:00:16.034813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:25.324 [2024-05-13 03:00:16.034837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.324 [2024-05-13 03:00:16.034853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.324 [2024-05-13 03:00:16.035301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.324 [2024-05-13 03:00:16.035326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:25.324 [2024-05-13 03:00:16.035347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x200 00:19:25.324 [2024-05-13 03:00:16.035363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:25.324 [2024-05-13 03:00:16.035836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.324 [2024-05-13 03:00:16.035860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:25.324 [2024-05-13 03:00:16.035882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.324 [2024-05-13 03:00:16.035897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:25.324 [2024-05-13 03:00:16.036318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.324 [2024-05-13 03:00:16.036341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:25.324 [2024-05-13 03:00:16.036368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.324 [2024-05-13 03:00:16.036385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:25.324 passed 00:19:25.324 Test: blockdev nvme passthru rw ...passed 00:19:25.324 Test: blockdev nvme passthru vendor specific ...[2024-05-13 03:00:16.119153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:25.324 [2024-05-13 03:00:16.119180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:25.324 [2024-05-13 03:00:16.119447] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:25.324 [2024-05-13 03:00:16.119470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:25.324 [2024-05-13 03:00:16.119721] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:25.324 [2024-05-13 03:00:16.119744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:25.324 [2024-05-13 03:00:16.119995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:25.324 [2024-05-13 03:00:16.120018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:25.324 passed 00:19:25.583 Test: blockdev nvme admin passthru ...passed 00:19:25.583 Test: blockdev copy ...passed 00:19:25.583 00:19:25.583 Run Summary: Type Total Ran Passed Failed Inactive 00:19:25.583 suites 1 1 n/a 0 0 00:19:25.583 tests 23 23 23 0 0 00:19:25.583 asserts 152 152 152 0 n/a 00:19:25.583 00:19:25.583 Elapsed time = 1.352 seconds 00:19:25.841 03:00:16 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:25.841 03:00:16 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.841 03:00:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:25.841 03:00:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.841 03:00:16 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:25.841 03:00:16 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:25.841 03:00:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:25.841 03:00:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:19:25.841 03:00:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:25.841 03:00:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:19:25.841 03:00:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:25.841 03:00:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:25.841 rmmod nvme_tcp 00:19:25.841 rmmod nvme_fabrics 00:19:25.841 rmmod nvme_keyring 00:19:25.841 03:00:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:25.841 03:00:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:19:25.841 03:00:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:19:25.841 03:00:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 363807 ']' 00:19:25.841 03:00:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 363807 00:19:25.841 03:00:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 363807 ']' 00:19:25.841 03:00:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 363807 00:19:25.841 03:00:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:19:25.841 03:00:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:25.841 03:00:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 363807 00:19:25.841 03:00:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:19:25.841 03:00:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:19:25.841 03:00:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # echo 'killing process with pid 363807' 00:19:25.841 killing process with pid 363807 00:19:25.841 03:00:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 363807 00:19:25.841 [2024-05-13 03:00:16.569949] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:25.841 03:00:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 363807 00:19:26.408 03:00:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:26.409 03:00:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:26.409 03:00:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:26.409 03:00:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:26.409 03:00:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:26.409 03:00:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
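Teardown (nvmftestfini) then reverses the setup. Condensed from the trace, with one assumption flagged inline (pid 363807 and the cvl_* names are specific to this run):

    # Unload the kernel initiator modules; the rmmod lines above are their output
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # Stop the nvmf_tgt started for this test; ps reports it as reactor_3 here,
    # so killprocess takes the plain (non-sudo) kill path
    kill 363807

    # Tear down the namespace and flush the initiator-side address
    _remove_spdk_ns              # assumed to delete cvl_0_0_ns_spdk; not shown in this excerpt
    ip -4 addr flush cvl_0_1

The deprecation counter printed by killprocess ("scheduled for removal in v24.09 hit 1 times") refers back to the [listen_]address.transport warning raised when the listener was added.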
00:19:26.409 03:00:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:26.409 03:00:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.364 03:00:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:28.364 00:19:28.364 real 0m6.350s 00:19:28.364 user 0m10.391s 00:19:28.364 sys 0m2.402s 00:19:28.364 03:00:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:28.364 03:00:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:28.364 ************************************ 00:19:28.364 END TEST nvmf_bdevio_no_huge 00:19:28.364 ************************************ 00:19:28.364 03:00:19 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:28.364 03:00:19 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:28.364 03:00:19 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:28.364 03:00:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:28.364 ************************************ 00:19:28.364 START TEST nvmf_tls 00:19:28.364 ************************************ 00:19:28.364 03:00:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:28.364 * Looking for test storage... 00:19:28.364 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:28.364 03:00:19 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:28.364 03:00:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:28.364 03:00:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:28.364 03:00:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:28.364 03:00:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:28.364 03:00:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:28.364 03:00:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:28.364 03:00:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:28.364 03:00:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:28.364 03:00:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:28.364 03:00:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:28.364 03:00:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:28.364 03:00:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:28.364 03:00:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:28.364 03:00:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:28.364 03:00:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:28.364 03:00:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:28.364 03:00:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:28.364 03:00:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:28.364 
03:00:19 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:28.364 03:00:19 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:28.364 03:00:19 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:28.364 03:00:19 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.364 03:00:19 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.364 03:00:19 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.364 03:00:19 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:28.364 03:00:19 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.364 03:00:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:19:28.364 03:00:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:28.364 03:00:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:28.364 03:00:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:28.364 03:00:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:28.364 03:00:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:28.364 03:00:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:28.364 03:00:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:28.364 03:00:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:28.364 
03:00:19 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:28.364 03:00:19 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:19:28.365 03:00:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:28.365 03:00:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:28.365 03:00:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:28.365 03:00:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:28.365 03:00:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:28.365 03:00:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:28.365 03:00:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:28.365 03:00:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.630 03:00:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:28.630 03:00:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:28.630 03:00:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:19:28.630 03:00:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:30.540 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:30.540 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:30.540 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:30.540 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:30.540 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:30.541 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:30.541 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:30.541 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:30.541 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:30.541 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:30.541 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:30.541 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:30.541 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:30.541 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:30.541 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:30.541 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:30.541 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:30.541 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:30.541 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:30.541 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:30.541 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:30.541 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:30.541 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:30.541 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:19:30.541 00:19:30.541 --- 10.0.0.2 ping statistics --- 00:19:30.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.541 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:19:30.541 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:30.541 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:30.541 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:19:30.541 00:19:30.541 --- 10.0.0.1 ping statistics --- 00:19:30.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.541 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:19:30.541 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:30.541 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:19:30.541 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:30.541 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:30.541 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:30.541 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:30.541 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:30.541 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:30.541 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:30.541 03:00:21 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:30.541 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:30.541 03:00:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:30.541 03:00:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.541 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=366020 00:19:30.541 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:30.541 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 366020 00:19:30.541 03:00:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 366020 ']' 00:19:30.541 03:00:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.541 03:00:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:30.541 03:00:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.541 03:00:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:30.541 03:00:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.541 [2024-05-13 03:00:21.320376] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 
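The nvmf_tcp_init block traced above wires up the physical test network before the target application is started: the first ice-bound port (cvl_0_0) is moved into a private network namespace and addressed as the target at 10.0.0.2, while the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and the two pings confirm the path in both directions. A minimal sketch of that wiring, using the interface names, addresses and port from this run; the TARGET_IF/INITIATOR_IF/NETNS variable names are only illustrative, and the error handling in nvmf/common.sh is omitted:

TARGET_IF=cvl_0_0        # first NIC port: becomes the target-side interface
INITIATOR_IF=cvl_0_1     # second NIC port: stays in the root namespace as the initiator
NETNS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NETNS"
ip link set "$TARGET_IF" netns "$NETNS"                            # isolate the target port in its own namespace

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"                        # initiator address (root namespace)
ip netns exec "$NETNS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"    # target address (inside the namespace)

ip link set "$INITIATOR_IF" up
ip netns exec "$NETNS" ip link set "$TARGET_IF" up
ip netns exec "$NETNS" ip link set lo up

iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT   # accept NVMe/TCP (port 4420) traffic on the initiator-side interface

ping -c 1 10.0.0.2                          # initiator -> target
ip netns exec "$NETNS" ping -c 1 10.0.0.1   # target -> initiator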
00:19:30.541 [2024-05-13 03:00:21.320467] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:30.800 EAL: No free 2048 kB hugepages reported on node 1 00:19:30.800 [2024-05-13 03:00:21.361111] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:30.800 [2024-05-13 03:00:21.387360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.800 [2024-05-13 03:00:21.473857] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:30.800 [2024-05-13 03:00:21.473912] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:30.800 [2024-05-13 03:00:21.473941] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:30.800 [2024-05-13 03:00:21.473953] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:30.800 [2024-05-13 03:00:21.473962] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:30.800 [2024-05-13 03:00:21.474010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:30.800 03:00:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:30.800 03:00:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:19:30.800 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:30.800 03:00:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:30.800 03:00:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.800 03:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:30.800 03:00:21 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:19:30.800 03:00:21 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:31.057 true 00:19:31.058 03:00:21 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:31.058 03:00:21 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:19:31.315 03:00:22 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:19:31.315 03:00:22 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:19:31.315 03:00:22 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:31.576 03:00:22 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:31.576 03:00:22 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:19:31.836 03:00:22 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:19:31.836 03:00:22 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:19:31.836 03:00:22 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:32.402 03:00:22 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 
00:19:32.402 03:00:22 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:19:32.402 03:00:23 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:19:32.402 03:00:23 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:19:32.402 03:00:23 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:32.402 03:00:23 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:19:32.660 03:00:23 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:19:32.660 03:00:23 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:19:32.660 03:00:23 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:32.917 03:00:23 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:32.917 03:00:23 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:19:33.174 03:00:23 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:19:33.174 03:00:23 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:19:33.174 03:00:23 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:33.432 03:00:24 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:33.432 03:00:24 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:19:33.690 03:00:24 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:19:33.690 03:00:24 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:19:33.690 03:00:24 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:33.690 03:00:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:33.690 03:00:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:33.690 03:00:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:33.690 03:00:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:19:33.690 03:00:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:19:33.690 03:00:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:33.690 03:00:24 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:33.690 03:00:24 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:33.690 03:00:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:33.690 03:00:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:33.690 03:00:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:33.690 03:00:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:19:33.690 03:00:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:19:33.690 03:00:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:33.690 03:00:24 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:33.690 03:00:24 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:19:33.690 
03:00:24 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.O3N3s9J468 00:19:33.690 03:00:24 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:33.690 03:00:24 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.jzpu8fuOsx 00:19:33.690 03:00:24 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:33.690 03:00:24 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:33.690 03:00:24 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.O3N3s9J468 00:19:33.690 03:00:24 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.jzpu8fuOsx 00:19:33.690 03:00:24 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:33.948 03:00:24 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:34.514 03:00:25 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.O3N3s9J468 00:19:34.514 03:00:25 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.O3N3s9J468 00:19:34.514 03:00:25 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:34.771 [2024-05-13 03:00:25.382845] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:34.771 03:00:25 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:35.028 03:00:25 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:35.287 [2024-05-13 03:00:25.912227] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:35.287 [2024-05-13 03:00:25.912343] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:35.287 [2024-05-13 03:00:25.912534] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:35.287 03:00:25 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:35.544 malloc0 00:19:35.544 03:00:26 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:35.801 03:00:26 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.O3N3s9J468 00:19:36.059 [2024-05-13 03:00:26.761979] tcp.c:3657:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:36.059 03:00:26 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path 
/tmp/tmp.O3N3s9J468 00:19:36.059 EAL: No free 2048 kB hugepages reported on node 1 00:19:48.256 Initializing NVMe Controllers 00:19:48.256 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:48.256 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:48.256 Initialization complete. Launching workers. 00:19:48.256 ======================================================== 00:19:48.256 Latency(us) 00:19:48.256 Device Information : IOPS MiB/s Average min max 00:19:48.256 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7785.58 30.41 8223.07 1346.69 9337.83 00:19:48.256 ======================================================== 00:19:48.256 Total : 7785.58 30.41 8223.07 1346.69 9337.83 00:19:48.256 00:19:48.256 03:00:36 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.O3N3s9J468 00:19:48.256 03:00:36 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:48.256 03:00:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:48.256 03:00:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:48.256 03:00:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.O3N3s9J468' 00:19:48.256 03:00:36 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:48.256 03:00:36 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=367796 00:19:48.256 03:00:36 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:48.256 03:00:36 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:48.256 03:00:36 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 367796 /var/tmp/bdevperf.sock 00:19:48.256 03:00:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 367796 ']' 00:19:48.256 03:00:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:48.256 03:00:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:48.256 03:00:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:48.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:48.256 03:00:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:48.256 03:00:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.256 [2024-05-13 03:00:36.935355] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:19:48.256 [2024-05-13 03:00:36.935436] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid367796 ] 00:19:48.256 EAL: No free 2048 kB hugepages reported on node 1 00:19:48.256 [2024-05-13 03:00:36.968151] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
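The two keys exercised above are NVMe TLS PSKs in the interchange format; the trace shows format_interchange_psk only as a `python -` heredoc, so the helper below is a reconstruction rather than the script's actual body. The key string appears to be taken as literal ASCII bytes, a CRC-32 appended, and the result base64-encoded behind the NVMeTLSkey-1 prefix and a two-digit hash identifier; the function name, the CRC byte order and the digest field formatting are assumptions inferred from the keys printed above:

# Sketch only: mirrors what nvmf/common.sh's format_interchange_psk/format_key appear to compute.
format_interchange_psk_sketch() {
  local key=$1 digest=$2
  python3 -c '
import base64, sys, zlib
key = sys.argv[1].encode()                    # the hex-looking string is used as literal ASCII bytes
digest = int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")   # assumption: 4-byte CRC-32, little-endian, appended to the key
print(f"NVMeTLSkey-1:{digest:02}:{base64.b64encode(key + crc).decode()}:")
' "$key" "$digest"
}

format_interchange_psk_sketch 00112233445566778899aabbccddeeff 1
# should print the first key from the run above (NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:) if the assumptions hold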
00:19:48.256 [2024-05-13 03:00:36.996338] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.256 [2024-05-13 03:00:37.080657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:48.256 03:00:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:48.256 03:00:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:19:48.256 03:00:37 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.O3N3s9J468 00:19:48.256 [2024-05-13 03:00:37.406391] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:48.256 [2024-05-13 03:00:37.406527] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:48.256 TLSTESTn1 00:19:48.256 03:00:37 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:48.256 Running I/O for 10 seconds... 00:19:58.228 00:19:58.228 Latency(us) 00:19:58.228 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:58.228 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:58.228 Verification LBA range: start 0x0 length 0x2000 00:19:58.228 TLSTESTn1 : 10.09 997.83 3.90 0.00 0.00 127773.00 7039.05 173985.94 00:19:58.228 =================================================================================================================== 00:19:58.228 Total : 997.83 3.90 0.00 0.00 127773.00 7039.05 173985.94 00:19:58.228 0 00:19:58.228 03:00:47 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:58.228 03:00:47 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 367796 00:19:58.228 03:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 367796 ']' 00:19:58.228 03:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 367796 00:19:58.228 03:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:19:58.228 03:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:58.228 03:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 367796 00:19:58.228 03:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:19:58.228 03:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:19:58.228 03:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 367796' 00:19:58.228 killing process with pid 367796 00:19:58.228 03:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 367796 00:19:58.228 Received shutdown signal, test time was about 10.000000 seconds 00:19:58.228 00:19:58.228 Latency(us) 00:19:58.228 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:58.228 =================================================================================================================== 00:19:58.228 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:58.228 [2024-05-13 03:00:47.764907] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in 
v24.09 hit 1 times 00:19:58.228 03:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 367796 00:19:58.228 03:00:47 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jzpu8fuOsx 00:19:58.228 03:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:58.228 03:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jzpu8fuOsx 00:19:58.228 03:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:58.228 03:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:58.228 03:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:58.228 03:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:58.228 03:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jzpu8fuOsx 00:19:58.228 03:00:47 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:58.228 03:00:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:58.228 03:00:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:58.228 03:00:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.jzpu8fuOsx' 00:19:58.228 03:00:47 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:58.228 03:00:47 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=369116 00:19:58.228 03:00:47 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:58.228 03:00:47 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:58.228 03:00:47 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 369116 /var/tmp/bdevperf.sock 00:19:58.228 03:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 369116 ']' 00:19:58.228 03:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:58.228 03:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:58.228 03:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:58.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:58.228 03:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:58.228 03:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.228 [2024-05-13 03:00:48.022821] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:19:58.228 [2024-05-13 03:00:48.022904] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid369116 ] 00:19:58.228 EAL: No free 2048 kB hugepages reported on node 1 00:19:58.228 [2024-05-13 03:00:48.053670] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:19:58.228 [2024-05-13 03:00:48.080466] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.228 [2024-05-13 03:00:48.160566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:58.228 03:00:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:58.228 03:00:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:19:58.228 03:00:48 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jzpu8fuOsx 00:19:58.228 [2024-05-13 03:00:48.507059] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:58.228 [2024-05-13 03:00:48.507173] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:58.228 [2024-05-13 03:00:48.512524] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:58.228 [2024-05-13 03:00:48.513101] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f591f0 (107): Transport endpoint is not connected 00:19:58.228 [2024-05-13 03:00:48.514087] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f591f0 (9): Bad file descriptor 00:19:58.228 [2024-05-13 03:00:48.515087] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:58.228 [2024-05-13 03:00:48.515108] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:58.228 [2024-05-13 03:00:48.515137] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:58.228 request: 00:19:58.228 { 00:19:58.228 "name": "TLSTEST", 00:19:58.228 "trtype": "tcp", 00:19:58.228 "traddr": "10.0.0.2", 00:19:58.228 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:58.228 "adrfam": "ipv4", 00:19:58.228 "trsvcid": "4420", 00:19:58.228 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.228 "psk": "/tmp/tmp.jzpu8fuOsx", 00:19:58.228 "method": "bdev_nvme_attach_controller", 00:19:58.228 "req_id": 1 00:19:58.228 } 00:19:58.228 Got JSON-RPC error response 00:19:58.228 response: 00:19:58.228 { 00:19:58.229 "code": -32602, 00:19:58.229 "message": "Invalid parameters" 00:19:58.229 } 00:19:58.229 03:00:48 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 369116 00:19:58.229 03:00:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 369116 ']' 00:19:58.229 03:00:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 369116 00:19:58.229 03:00:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:19:58.229 03:00:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:58.229 03:00:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 369116 00:19:58.229 03:00:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:19:58.229 03:00:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:19:58.229 03:00:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 369116' 00:19:58.229 killing process with pid 369116 00:19:58.229 03:00:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 369116 00:19:58.229 Received shutdown signal, test time was about 2.666569 seconds 00:19:58.229 00:19:58.229 Latency(us) 00:19:58.229 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:58.229 =================================================================================================================== 00:19:58.229 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:58.229 [2024-05-13 03:00:48.567213] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:58.229 03:00:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 369116 00:19:58.229 03:00:48 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:58.229 03:00:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:58.229 03:00:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:58.229 03:00:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:58.229 03:00:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:58.229 03:00:48 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.O3N3s9J468 00:19:58.229 03:00:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:58.229 03:00:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.O3N3s9J468 00:19:58.229 03:00:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:58.229 03:00:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:58.229 03:00:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:58.229 03:00:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:19:58.229 03:00:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.O3N3s9J468 00:19:58.229 03:00:48 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:58.229 03:00:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:58.229 03:00:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:58.229 03:00:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.O3N3s9J468' 00:19:58.229 03:00:48 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:58.229 03:00:48 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=369243 00:19:58.229 03:00:48 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:58.229 03:00:48 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:58.229 03:00:48 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 369243 /var/tmp/bdevperf.sock 00:19:58.229 03:00:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 369243 ']' 00:19:58.229 03:00:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:58.229 03:00:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:58.229 03:00:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:58.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:58.229 03:00:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:58.229 03:00:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.229 [2024-05-13 03:00:48.823181] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:19:58.229 [2024-05-13 03:00:48.823271] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid369243 ] 00:19:58.229 EAL: No free 2048 kB hugepages reported on node 1 00:19:58.229 [2024-05-13 03:00:48.856255] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:19:58.229 [2024-05-13 03:00:48.885057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.229 [2024-05-13 03:00:48.971585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:58.486 03:00:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:58.486 03:00:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:19:58.486 03:00:49 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.O3N3s9J468 00:19:58.777 [2024-05-13 03:00:49.292692] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:58.777 [2024-05-13 03:00:49.292855] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:58.777 [2024-05-13 03:00:49.302165] tcp.c: 879:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:58.777 [2024-05-13 03:00:49.302211] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:58.778 [2024-05-13 03:00:49.302247] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:58.778 [2024-05-13 03:00:49.302891] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16341f0 (107): Transport endpoint is not connected 00:19:58.778 [2024-05-13 03:00:49.303881] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16341f0 (9): Bad file descriptor 00:19:58.778 [2024-05-13 03:00:49.304881] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:58.778 [2024-05-13 03:00:49.304901] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:58.778 [2024-05-13 03:00:49.304930] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:58.778 request: 00:19:58.778 { 00:19:58.778 "name": "TLSTEST", 00:19:58.778 "trtype": "tcp", 00:19:58.778 "traddr": "10.0.0.2", 00:19:58.778 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:58.778 "adrfam": "ipv4", 00:19:58.778 "trsvcid": "4420", 00:19:58.778 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.778 "psk": "/tmp/tmp.O3N3s9J468", 00:19:58.778 "method": "bdev_nvme_attach_controller", 00:19:58.778 "req_id": 1 00:19:58.778 } 00:19:58.778 Got JSON-RPC error response 00:19:58.778 response: 00:19:58.778 { 00:19:58.778 "code": -32602, 00:19:58.778 "message": "Invalid parameters" 00:19:58.778 } 00:19:58.778 03:00:49 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 369243 00:19:58.778 03:00:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 369243 ']' 00:19:58.778 03:00:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 369243 00:19:58.778 03:00:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:19:58.778 03:00:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:58.778 03:00:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 369243 00:19:58.778 03:00:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:19:58.778 03:00:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:19:58.778 03:00:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 369243' 00:19:58.778 killing process with pid 369243 00:19:58.778 03:00:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 369243 00:19:58.778 Received shutdown signal, test time was about 3.446835 seconds 00:19:58.778 00:19:58.778 Latency(us) 00:19:58.778 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:58.778 =================================================================================================================== 00:19:58.778 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:58.778 [2024-05-13 03:00:49.349388] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:58.778 03:00:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 369243 00:19:58.778 03:00:49 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:58.778 03:00:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:58.778 03:00:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:58.778 03:00:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:58.778 03:00:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:58.778 03:00:49 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.O3N3s9J468 00:19:58.778 03:00:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:58.778 03:00:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.O3N3s9J468 00:19:58.778 03:00:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:58.778 03:00:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:58.778 03:00:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:58.778 03:00:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:19:58.778 03:00:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.O3N3s9J468 00:19:58.778 03:00:49 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:58.778 03:00:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:58.778 03:00:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:58.778 03:00:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.O3N3s9J468' 00:19:58.778 03:00:49 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:58.778 03:00:49 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=369354 00:19:58.778 03:00:49 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:58.778 03:00:49 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:58.778 03:00:49 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 369354 /var/tmp/bdevperf.sock 00:19:58.778 03:00:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 369354 ']' 00:19:58.778 03:00:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:58.778 03:00:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:58.778 03:00:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:58.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:58.778 03:00:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:58.778 03:00:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.037 [2024-05-13 03:00:49.589234] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:19:59.037 [2024-05-13 03:00:49.589315] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid369354 ] 00:19:59.037 EAL: No free 2048 kB hugepages reported on node 1 00:19:59.037 [2024-05-13 03:00:49.620694] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:19:59.037 [2024-05-13 03:00:49.648571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.037 [2024-05-13 03:00:49.738888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:59.294 03:00:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:59.294 03:00:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:19:59.294 03:00:49 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.O3N3s9J468 00:19:59.294 [2024-05-13 03:00:50.093945] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:59.294 [2024-05-13 03:00:50.094092] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:59.553 [2024-05-13 03:00:50.104219] tcp.c: 879:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:59.553 [2024-05-13 03:00:50.104250] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:59.553 [2024-05-13 03:00:50.104303] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:59.553 [2024-05-13 03:00:50.105291] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5941f0 (107): Transport endpoint is not connected 00:19:59.553 [2024-05-13 03:00:50.106286] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5941f0 (9): Bad file descriptor 00:19:59.553 [2024-05-13 03:00:50.107286] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:59.553 [2024-05-13 03:00:50.107306] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:59.553 [2024-05-13 03:00:50.107334] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:19:59.553 request: 00:19:59.553 { 00:19:59.553 "name": "TLSTEST", 00:19:59.553 "trtype": "tcp", 00:19:59.553 "traddr": "10.0.0.2", 00:19:59.553 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:59.553 "adrfam": "ipv4", 00:19:59.553 "trsvcid": "4420", 00:19:59.553 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:59.553 "psk": "/tmp/tmp.O3N3s9J468", 00:19:59.553 "method": "bdev_nvme_attach_controller", 00:19:59.553 "req_id": 1 00:19:59.553 } 00:19:59.553 Got JSON-RPC error response 00:19:59.553 response: 00:19:59.553 { 00:19:59.553 "code": -32602, 00:19:59.553 "message": "Invalid parameters" 00:19:59.553 } 00:19:59.553 03:00:50 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 369354 00:19:59.553 03:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 369354 ']' 00:19:59.553 03:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 369354 00:19:59.553 03:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:19:59.553 03:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:59.553 03:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 369354 00:19:59.553 03:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:19:59.553 03:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:19:59.553 03:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 369354' 00:19:59.553 killing process with pid 369354 00:19:59.553 03:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 369354 00:19:59.553 Received shutdown signal, test time was about 4.255378 seconds 00:19:59.553 00:19:59.553 Latency(us) 00:19:59.553 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.553 =================================================================================================================== 00:19:59.553 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:59.553 [2024-05-13 03:00:50.159913] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:59.553 03:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 369354 00:19:59.811 03:00:50 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:59.811 03:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:59.811 03:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:59.811 03:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:59.811 03:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:59.811 03:00:50 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:59.811 03:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:59.811 03:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:59.811 03:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:59.811 03:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:59.811 03:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:59.811 03:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:19:59.811 03:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:59.811 03:00:50 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:59.811 03:00:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:59.811 03:00:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:59.811 03:00:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:59.811 03:00:50 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:59.811 03:00:50 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=369405 00:19:59.811 03:00:50 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:59.811 03:00:50 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:59.811 03:00:50 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 369405 /var/tmp/bdevperf.sock 00:19:59.811 03:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 369405 ']' 00:19:59.811 03:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:59.811 03:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:59.811 03:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:59.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:59.811 03:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:59.811 03:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.811 [2024-05-13 03:00:50.427306] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:19:59.811 [2024-05-13 03:00:50.427391] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid369405 ] 00:19:59.811 EAL: No free 2048 kB hugepages reported on node 1 00:19:59.811 [2024-05-13 03:00:50.459786] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
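The `NOT run_bdevperf ... ''` call above is the harness's way of asserting that a step must fail: the wrapper runs the command, captures its exit status, and inverts it, so the test only continues if the inner command really did return non-zero. A stripped-down sketch of that inversion, assuming only the exit-status handling matters (the real wrapper in autotest_common.sh also validates the argument with type -t, as the trace shows):

  NOT() {
      # run the wrapped command; succeed only if it fails
      local es=0
      "$@" || es=$?
      (( es != 0 ))      # exit status 0 iff the inner command failed
  }

  # usage mirroring the log, where run_bdevperf is the tls.sh helper traced above:
  #   NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''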
00:19:59.811 [2024-05-13 03:00:50.488071] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.811 [2024-05-13 03:00:50.573438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:00.069 03:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:00.069 03:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:00.069 03:00:50 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:00.327 [2024-05-13 03:00:50.906357] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:00.327 [2024-05-13 03:00:50.907747] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ad840 (9): Bad file descriptor 00:20:00.327 [2024-05-13 03:00:50.908742] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:00.327 [2024-05-13 03:00:50.908762] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:00.327 [2024-05-13 03:00:50.908791] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:00.327 request: 00:20:00.327 { 00:20:00.327 "name": "TLSTEST", 00:20:00.327 "trtype": "tcp", 00:20:00.327 "traddr": "10.0.0.2", 00:20:00.327 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:00.327 "adrfam": "ipv4", 00:20:00.327 "trsvcid": "4420", 00:20:00.327 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.327 "method": "bdev_nvme_attach_controller", 00:20:00.327 "req_id": 1 00:20:00.327 } 00:20:00.327 Got JSON-RPC error response 00:20:00.327 response: 00:20:00.327 { 00:20:00.327 "code": -32602, 00:20:00.327 "message": "Invalid parameters" 00:20:00.327 } 00:20:00.327 03:00:50 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 369405 00:20:00.327 03:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 369405 ']' 00:20:00.327 03:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 369405 00:20:00.327 03:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:00.327 03:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:00.327 03:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 369405 00:20:00.327 03:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:00.327 03:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:00.327 03:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 369405' 00:20:00.327 killing process with pid 369405 00:20:00.327 03:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 369405 00:20:00.327 Received shutdown signal, test time was about 5.052973 seconds 00:20:00.327 00:20:00.327 Latency(us) 00:20:00.327 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.327 =================================================================================================================== 00:20:00.327 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:00.327 03:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 369405 
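This second negative case differs from the first: no --psk is passed at all, so the initiator opens a plain TCP connection to a listener that was created with -k (TLS required), and the target rejects it during connection setup. On the host side that surfaces as errno 107 ("Transport endpoint is not connected") rather than an explicit key error. The contrast, using the same RPC as the log (flags taken verbatim from the traces above):

  # fails against a TLS-only listener: no key material is offered
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

  # succeeds later in this log once a 0600-permission PSK file is supplied
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /tmp/tmp.urbfOxNyDE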
00:20:00.585 03:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:00.585 03:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:00.585 03:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:00.585 03:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:00.585 03:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:00.585 03:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 366020 00:20:00.585 03:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 366020 ']' 00:20:00.585 03:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 366020 00:20:00.585 03:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:00.585 03:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:00.585 03:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 366020 00:20:00.585 03:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:00.585 03:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:00.585 03:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 366020' 00:20:00.585 killing process with pid 366020 00:20:00.585 03:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 366020 00:20:00.585 [2024-05-13 03:00:51.198854] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:00.585 [2024-05-13 03:00:51.198911] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:00.585 03:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 366020 00:20:00.843 03:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:00.843 03:00:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:00.843 03:00:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:00.843 03:00:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:00.843 03:00:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:00.843 03:00:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:20:00.843 03:00:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:00.843 03:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:00.843 03:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:20:00.843 03:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.urbfOxNyDE 00:20:00.844 03:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:00.844 03:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.urbfOxNyDE 00:20:00.844 03:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:20:00.844 03:00:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:00.844 03:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 
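format_interchange_psk above wraps the configured key into the NVMe/TCP TLS PSK interchange form: a "NVMeTLSkey-1" prefix, a hash indicator ("02" here; in the interchange format this field conventionally selects SHA-384, with "01" for SHA-256), and a base64 payload derived from the key material, written to a mode-0600 temp file so the later permission checks pass. A sketch of preparing such a key file from the already-formatted interchange string generated in this run (the CRC/base64 derivation itself is left to nvmf/common.sh's format_key and is not reimplemented here):

  key_long='NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:'
  key_path=$(mktemp)              # e.g. /tmp/tmp.urbfOxNyDE in this run
  echo -n "$key_long" > "$key_path"
  chmod 0600 "$key_path"          # both host and target refuse group/world-accessible PSK files
  echo "PSK written to $key_path"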
00:20:00.844 03:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.844 03:00:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=369556 00:20:00.844 03:00:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:00.844 03:00:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 369556 00:20:00.844 03:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 369556 ']' 00:20:00.844 03:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.844 03:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:00.844 03:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.844 03:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:00.844 03:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.844 [2024-05-13 03:00:51.560862] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:20:00.844 [2024-05-13 03:00:51.560941] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:00.844 EAL: No free 2048 kB hugepages reported on node 1 00:20:00.844 [2024-05-13 03:00:51.596058] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:00.844 [2024-05-13 03:00:51.627866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.101 [2024-05-13 03:00:51.723493] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:01.101 [2024-05-13 03:00:51.723558] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:01.102 [2024-05-13 03:00:51.723574] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:01.102 [2024-05-13 03:00:51.723588] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:01.102 [2024-05-13 03:00:51.723599] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
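The target is started with -e 0xFFFF, which enables every tracepoint group, and the app_setup_trace notices spell out how to get at that data: attach spdk_trace to the running instance, or copy the shared-memory file for offline decoding. A short sketch of both options, assuming spdk_trace was built alongside the target in this workspace (the binary path and the -f flag for reading a copied file are assumptions; check spdk_trace -h):

  # live snapshot of the nvmf target's trace buffer (app name and instance id as above)
  ./build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace.txt

  # or grab the shared-memory trace file for offline analysis, as the log suggests
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0
  ./build/bin/spdk_trace -f /tmp/nvmf_trace.0 > nvmf_trace_offline.txt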
00:20:01.102 [2024-05-13 03:00:51.723631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:01.102 03:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:01.102 03:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:01.102 03:00:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:01.102 03:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:01.102 03:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.102 03:00:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:01.102 03:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.urbfOxNyDE 00:20:01.102 03:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.urbfOxNyDE 00:20:01.102 03:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:01.359 [2024-05-13 03:00:52.139158] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:01.359 03:00:52 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:01.924 03:00:52 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:01.924 [2024-05-13 03:00:52.640470] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:01.924 [2024-05-13 03:00:52.640590] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:01.924 [2024-05-13 03:00:52.640812] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:01.924 03:00:52 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:02.182 malloc0 00:20:02.182 03:00:52 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:02.439 03:00:53 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.urbfOxNyDE 00:20:02.697 [2024-05-13 03:00:53.463075] tcp.c:3657:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:02.697 03:00:53 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.urbfOxNyDE 00:20:02.697 03:00:53 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:02.697 03:00:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:02.697 03:00:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:02.697 03:00:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.urbfOxNyDE' 00:20:02.697 03:00:53 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:02.697 03:00:53 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- 
# bdevperf_pid=369840 00:20:02.697 03:00:53 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:02.697 03:00:53 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:02.697 03:00:53 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 369840 /var/tmp/bdevperf.sock 00:20:02.697 03:00:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 369840 ']' 00:20:02.697 03:00:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:02.697 03:00:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:02.697 03:00:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:02.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:02.697 03:00:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:02.697 03:00:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:02.956 [2024-05-13 03:00:53.524465] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:20:02.956 [2024-05-13 03:00:53.524539] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid369840 ] 00:20:02.956 EAL: No free 2048 kB hugepages reported on node 1 00:20:02.956 [2024-05-13 03:00:53.555103] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:02.956 [2024-05-13 03:00:53.581640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.956 [2024-05-13 03:00:53.664680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:03.214 03:00:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:03.214 03:00:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:03.214 03:00:53 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.urbfOxNyDE 00:20:03.214 [2024-05-13 03:00:54.005170] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:03.214 [2024-05-13 03:00:54.005301] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:03.472 TLSTESTn1 00:20:03.472 03:00:54 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:03.472 Running I/O for 10 seconds... 
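At this point the positive path is fully assembled, and the Latency table that follows reports the verify workload completing over the TLS-protected queue pair. The sketch below simply replays the RPCs as they appear in the log (target on the default /var/tmp/spdk.sock, initiator inside a bdevperf started with -z -r /var/tmp/bdevperf.sock); only the grouping comments are added:

  # --- target side: TLS-capable subsystem backed by a malloc bdev ---
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.urbfOxNyDE

  # --- initiator side: attach over TLS, then drive I/O through bdevperf ---
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.urbfOxNyDE
  examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests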
00:20:15.668 00:20:15.668 Latency(us) 00:20:15.668 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.668 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:15.668 Verification LBA range: start 0x0 length 0x2000 00:20:15.668 TLSTESTn1 : 10.12 803.80 3.14 0.00 0.00 158483.14 6165.24 142917.03 00:20:15.669 =================================================================================================================== 00:20:15.669 Total : 803.80 3.14 0.00 0.00 158483.14 6165.24 142917.03 00:20:15.669 0 00:20:15.669 03:01:04 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:15.669 03:01:04 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 369840 00:20:15.669 03:01:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 369840 ']' 00:20:15.669 03:01:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 369840 00:20:15.669 03:01:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:15.669 03:01:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:15.669 03:01:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 369840 00:20:15.669 03:01:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:15.669 03:01:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:15.669 03:01:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 369840' 00:20:15.669 killing process with pid 369840 00:20:15.669 03:01:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 369840 00:20:15.669 Received shutdown signal, test time was about 10.000000 seconds 00:20:15.669 00:20:15.669 Latency(us) 00:20:15.669 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.669 =================================================================================================================== 00:20:15.669 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:15.669 [2024-05-13 03:01:04.383877] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:15.669 03:01:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 369840 00:20:15.669 03:01:04 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.urbfOxNyDE 00:20:15.669 03:01:04 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.urbfOxNyDE 00:20:15.669 03:01:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:15.669 03:01:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.urbfOxNyDE 00:20:15.669 03:01:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:15.669 03:01:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:15.669 03:01:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:15.669 03:01:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:15.669 03:01:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.urbfOxNyDE 00:20:15.669 03:01:04 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
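The next negative case loosens the key file to mode 0666 and expects the attach to fail: SPDK checks PSK file permissions on both sides and refuses a key readable by group or other, which is what produces the "Incorrect permissions for PSK file" and "Could not load PSK" errors a little further down. A quick way to reproduce and then undo that state outside the harness, on the same file the log uses:

  chmod 0666 /tmp/tmp.urbfOxNyDE        # too permissive: attach / add_host will be rejected
  stat -c '%a %n' /tmp/tmp.urbfOxNyDE   # shows 666
  chmod 0600 /tmp/tmp.urbfOxNyDE        # restore the mode the TLS paths require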
00:20:15.669 03:01:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:15.669 03:01:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:15.669 03:01:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.urbfOxNyDE' 00:20:15.669 03:01:04 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:15.669 03:01:04 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=371152 00:20:15.669 03:01:04 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:15.669 03:01:04 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:15.669 03:01:04 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 371152 /var/tmp/bdevperf.sock 00:20:15.669 03:01:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 371152 ']' 00:20:15.669 03:01:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:15.669 03:01:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:15.669 03:01:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:15.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:15.669 03:01:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:15.669 03:01:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.669 [2024-05-13 03:01:04.624577] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:20:15.669 [2024-05-13 03:01:04.624662] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid371152 ] 00:20:15.669 EAL: No free 2048 kB hugepages reported on node 1 00:20:15.669 [2024-05-13 03:01:04.656718] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
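Because a too-open key file only fails at attach time, a wrapper script can make the failure mode more obvious with a pre-flight check. A small, purely illustrative guard (not part of tls.sh; the 0600/0400 acceptance is an assumption based on the "Incorrect permissions" rejection seen below):

  psk=/tmp/tmp.urbfOxNyDE
  mode=$(stat -c '%a' "$psk")
  if [ "$mode" != 600 ] && [ "$mode" != 400 ]; then
      echo "refusing to use $psk: mode $mode has group/other bits set" >&2
      exit 1
  fi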
00:20:15.669 [2024-05-13 03:01:04.685348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.669 [2024-05-13 03:01:04.776328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:15.669 03:01:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:15.669 03:01:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:15.669 03:01:04 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.urbfOxNyDE 00:20:15.669 [2024-05-13 03:01:05.123525] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:15.669 [2024-05-13 03:01:05.123603] bdev_nvme.c:6105:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:15.669 [2024-05-13 03:01:05.123618] bdev_nvme.c:6214:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.urbfOxNyDE 00:20:15.669 request: 00:20:15.669 { 00:20:15.669 "name": "TLSTEST", 00:20:15.669 "trtype": "tcp", 00:20:15.669 "traddr": "10.0.0.2", 00:20:15.669 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:15.669 "adrfam": "ipv4", 00:20:15.669 "trsvcid": "4420", 00:20:15.669 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.669 "psk": "/tmp/tmp.urbfOxNyDE", 00:20:15.669 "method": "bdev_nvme_attach_controller", 00:20:15.669 "req_id": 1 00:20:15.669 } 00:20:15.669 Got JSON-RPC error response 00:20:15.669 response: 00:20:15.669 { 00:20:15.669 "code": -1, 00:20:15.669 "message": "Operation not permitted" 00:20:15.669 } 00:20:15.669 03:01:05 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 371152 00:20:15.669 03:01:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 371152 ']' 00:20:15.669 03:01:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 371152 00:20:15.669 03:01:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:15.669 03:01:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:15.669 03:01:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 371152 00:20:15.669 03:01:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:15.669 03:01:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:15.669 03:01:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 371152' 00:20:15.669 killing process with pid 371152 00:20:15.669 03:01:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 371152 00:20:15.669 Received shutdown signal, test time was about 10.000000 seconds 00:20:15.669 00:20:15.669 Latency(us) 00:20:15.669 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.669 =================================================================================================================== 00:20:15.669 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:15.669 03:01:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 371152 00:20:15.669 03:01:05 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:15.669 03:01:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:15.669 03:01:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:15.669 03:01:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # 
[[ -n '' ]] 00:20:15.669 03:01:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:15.669 03:01:05 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 369556 00:20:15.669 03:01:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 369556 ']' 00:20:15.669 03:01:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 369556 00:20:15.669 03:01:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:15.669 03:01:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:15.669 03:01:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 369556 00:20:15.669 03:01:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:15.669 03:01:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:15.669 03:01:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 369556' 00:20:15.669 killing process with pid 369556 00:20:15.669 03:01:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 369556 00:20:15.669 [2024-05-13 03:01:05.417074] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:15.669 [2024-05-13 03:01:05.417139] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:15.669 03:01:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 369556 00:20:15.669 03:01:05 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:20:15.669 03:01:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:15.669 03:01:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:15.669 03:01:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.669 03:01:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=371297 00:20:15.669 03:01:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:15.669 03:01:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 371297 00:20:15.669 03:01:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 371297 ']' 00:20:15.669 03:01:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:15.669 03:01:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:15.669 03:01:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:15.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:15.669 03:01:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:15.669 03:01:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.669 [2024-05-13 03:01:05.711475] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 
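nvmfappstart backgrounds the target inside the network namespace and then blocks in waitforlisten until the JSON-RPC socket answers; conceptually that is just a poll loop against the RPC endpoint. A simplified stand-in is sketched below (the real helper in autotest_common.sh is more careful about timeouts and error reporting; rpc_get_methods is used only as a cheap liveness probe):

  waitforlisten_sketch() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
      local i
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1                         # target died while starting
          if scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &>/dev/null; then
              return 0                                                   # RPC server is up
          fi
          sleep 0.5
      done
      return 1
  }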
00:20:15.670 [2024-05-13 03:01:05.711563] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:15.670 EAL: No free 2048 kB hugepages reported on node 1 00:20:15.670 [2024-05-13 03:01:05.751548] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:15.670 [2024-05-13 03:01:05.783859] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.670 [2024-05-13 03:01:05.873583] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:15.670 [2024-05-13 03:01:05.873645] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:15.670 [2024-05-13 03:01:05.873660] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:15.670 [2024-05-13 03:01:05.873674] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:15.670 [2024-05-13 03:01:05.873686] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:15.670 [2024-05-13 03:01:05.873728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:15.670 03:01:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:15.670 03:01:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:15.670 03:01:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:15.670 03:01:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:15.670 03:01:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.670 03:01:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:15.670 03:01:06 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.urbfOxNyDE 00:20:15.670 03:01:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:15.670 03:01:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.urbfOxNyDE 00:20:15.670 03:01:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:20:15.670 03:01:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:15.670 03:01:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:20:15.670 03:01:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:15.670 03:01:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.urbfOxNyDE 00:20:15.670 03:01:06 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.urbfOxNyDE 00:20:15.670 03:01:06 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:15.670 [2024-05-13 03:01:06.224239] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:15.670 03:01:06 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:15.928 03:01:06 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:15.928 [2024-05-13 03:01:06.717497] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:15.928 [2024-05-13 03:01:06.717581] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:15.928 [2024-05-13 03:01:06.717803] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:16.186 03:01:06 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:16.186 malloc0 00:20:16.186 03:01:06 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:16.444 03:01:07 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.urbfOxNyDE 00:20:16.703 [2024-05-13 03:01:07.455416] tcp.c:3567:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:16.703 [2024-05-13 03:01:07.455461] tcp.c:3653:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:20:16.703 [2024-05-13 03:01:07.455504] subsystem.c:1030:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:16.703 request: 00:20:16.703 { 00:20:16.703 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.703 "host": "nqn.2016-06.io.spdk:host1", 00:20:16.703 "psk": "/tmp/tmp.urbfOxNyDE", 00:20:16.703 "method": "nvmf_subsystem_add_host", 00:20:16.703 "req_id": 1 00:20:16.703 } 00:20:16.703 Got JSON-RPC error response 00:20:16.703 response: 00:20:16.703 { 00:20:16.703 "code": -32603, 00:20:16.703 "message": "Internal error" 00:20:16.703 } 00:20:16.703 03:01:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:16.703 03:01:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:16.703 03:01:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:16.703 03:01:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:16.703 03:01:07 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 371297 00:20:16.703 03:01:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 371297 ']' 00:20:16.703 03:01:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 371297 00:20:16.703 03:01:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:16.703 03:01:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:16.703 03:01:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 371297 00:20:16.703 03:01:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:16.703 03:01:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:16.703 03:01:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 371297' 00:20:16.703 killing process with pid 371297 00:20:16.703 03:01:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 371297 00:20:16.703 [2024-05-13 03:01:07.499096] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of 
trtype' scheduled for removal in v24.09 hit 1 times 00:20:16.703 03:01:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 371297 00:20:16.960 03:01:07 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.urbfOxNyDE 00:20:16.960 03:01:07 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:20:16.960 03:01:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:16.960 03:01:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:16.960 03:01:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.960 03:01:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=371542 00:20:16.961 03:01:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:16.961 03:01:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 371542 00:20:16.961 03:01:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 371542 ']' 00:20:16.961 03:01:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.961 03:01:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:16.961 03:01:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:16.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.961 03:01:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:16.961 03:01:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:17.218 [2024-05-13 03:01:07.779193] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:20:17.218 [2024-05-13 03:01:07.779274] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:17.218 EAL: No free 2048 kB hugepages reported on node 1 00:20:17.218 [2024-05-13 03:01:07.815904] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:17.218 [2024-05-13 03:01:07.842887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.218 [2024-05-13 03:01:07.929058] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:17.218 [2024-05-13 03:01:07.929116] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:17.218 [2024-05-13 03:01:07.929144] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:17.218 [2024-05-13 03:01:07.929156] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:17.218 [2024-05-13 03:01:07.929166] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
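The target-side variant of the permission test mirrors the host-side one: nvmf_subsystem_add_host reads the PSK file itself, so a 0666 key makes that RPC fail with a JSON-RPC Internal error (-32603), and restoring 0600 at tls.sh@181 lets the full setup_nvmf_tgt sequence succeed against the fresh target started above. The decisive pair of calls, as issued in this log:

  # rejected while the key file is world-readable (the -32603 response above)
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.urbfOxNyDE

  # accepted again once the mode is back to 0600
  chmod 0600 /tmp/tmp.urbfOxNyDE
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.urbfOxNyDE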
00:20:17.218 [2024-05-13 03:01:07.929203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:17.475 03:01:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:17.475 03:01:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:17.475 03:01:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:17.475 03:01:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:17.475 03:01:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:17.475 03:01:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:17.475 03:01:08 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.urbfOxNyDE 00:20:17.475 03:01:08 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.urbfOxNyDE 00:20:17.475 03:01:08 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:17.732 [2024-05-13 03:01:08.287641] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:17.732 03:01:08 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:17.989 03:01:08 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:17.989 [2024-05-13 03:01:08.768876] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:17.989 [2024-05-13 03:01:08.768967] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:17.989 [2024-05-13 03:01:08.769179] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:17.989 03:01:08 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:18.247 malloc0 00:20:18.247 03:01:09 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:18.505 03:01:09 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.urbfOxNyDE 00:20:18.762 [2024-05-13 03:01:09.510178] tcp.c:3657:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:18.762 03:01:09 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=371759 00:20:18.762 03:01:09 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:18.762 03:01:09 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:18.762 03:01:09 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 371759 /var/tmp/bdevperf.sock 00:20:18.762 03:01:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 371759 ']' 00:20:18.762 03:01:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:20:18.762 03:01:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:18.762 03:01:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:18.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:18.762 03:01:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:18.762 03:01:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:19.021 [2024-05-13 03:01:09.572811] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:20:19.021 [2024-05-13 03:01:09.572889] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid371759 ] 00:20:19.021 EAL: No free 2048 kB hugepages reported on node 1 00:20:19.021 [2024-05-13 03:01:09.604251] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:19.021 [2024-05-13 03:01:09.631998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.021 [2024-05-13 03:01:09.716541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:19.021 03:01:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:19.021 03:01:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:19.021 03:01:09 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.urbfOxNyDE 00:20:19.280 [2024-05-13 03:01:10.060148] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:19.280 [2024-05-13 03:01:10.060286] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:19.538 TLSTESTn1 00:20:19.538 03:01:10 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:19.796 03:01:10 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:20:19.796 "subsystems": [ 00:20:19.796 { 00:20:19.796 "subsystem": "keyring", 00:20:19.796 "config": [] 00:20:19.796 }, 00:20:19.796 { 00:20:19.796 "subsystem": "iobuf", 00:20:19.796 "config": [ 00:20:19.796 { 00:20:19.796 "method": "iobuf_set_options", 00:20:19.796 "params": { 00:20:19.796 "small_pool_count": 8192, 00:20:19.796 "large_pool_count": 1024, 00:20:19.796 "small_bufsize": 8192, 00:20:19.796 "large_bufsize": 135168 00:20:19.796 } 00:20:19.796 } 00:20:19.796 ] 00:20:19.796 }, 00:20:19.796 { 00:20:19.796 "subsystem": "sock", 00:20:19.796 "config": [ 00:20:19.796 { 00:20:19.796 "method": "sock_impl_set_options", 00:20:19.796 "params": { 00:20:19.796 "impl_name": "posix", 00:20:19.796 "recv_buf_size": 2097152, 00:20:19.796 "send_buf_size": 2097152, 00:20:19.796 "enable_recv_pipe": true, 00:20:19.796 "enable_quickack": false, 00:20:19.796 "enable_placement_id": 0, 00:20:19.796 "enable_zerocopy_send_server": true, 00:20:19.796 "enable_zerocopy_send_client": false, 00:20:19.796 "zerocopy_threshold": 0, 00:20:19.796 
"tls_version": 0, 00:20:19.796 "enable_ktls": false 00:20:19.796 } 00:20:19.796 }, 00:20:19.796 { 00:20:19.796 "method": "sock_impl_set_options", 00:20:19.796 "params": { 00:20:19.796 "impl_name": "ssl", 00:20:19.796 "recv_buf_size": 4096, 00:20:19.796 "send_buf_size": 4096, 00:20:19.796 "enable_recv_pipe": true, 00:20:19.796 "enable_quickack": false, 00:20:19.796 "enable_placement_id": 0, 00:20:19.796 "enable_zerocopy_send_server": true, 00:20:19.796 "enable_zerocopy_send_client": false, 00:20:19.796 "zerocopy_threshold": 0, 00:20:19.796 "tls_version": 0, 00:20:19.796 "enable_ktls": false 00:20:19.796 } 00:20:19.796 } 00:20:19.796 ] 00:20:19.796 }, 00:20:19.796 { 00:20:19.796 "subsystem": "vmd", 00:20:19.796 "config": [] 00:20:19.796 }, 00:20:19.796 { 00:20:19.796 "subsystem": "accel", 00:20:19.796 "config": [ 00:20:19.796 { 00:20:19.796 "method": "accel_set_options", 00:20:19.796 "params": { 00:20:19.796 "small_cache_size": 128, 00:20:19.796 "large_cache_size": 16, 00:20:19.796 "task_count": 2048, 00:20:19.796 "sequence_count": 2048, 00:20:19.796 "buf_count": 2048 00:20:19.796 } 00:20:19.796 } 00:20:19.796 ] 00:20:19.796 }, 00:20:19.796 { 00:20:19.796 "subsystem": "bdev", 00:20:19.796 "config": [ 00:20:19.796 { 00:20:19.796 "method": "bdev_set_options", 00:20:19.796 "params": { 00:20:19.796 "bdev_io_pool_size": 65535, 00:20:19.796 "bdev_io_cache_size": 256, 00:20:19.796 "bdev_auto_examine": true, 00:20:19.796 "iobuf_small_cache_size": 128, 00:20:19.796 "iobuf_large_cache_size": 16 00:20:19.796 } 00:20:19.796 }, 00:20:19.796 { 00:20:19.796 "method": "bdev_raid_set_options", 00:20:19.796 "params": { 00:20:19.796 "process_window_size_kb": 1024 00:20:19.796 } 00:20:19.796 }, 00:20:19.796 { 00:20:19.796 "method": "bdev_iscsi_set_options", 00:20:19.796 "params": { 00:20:19.796 "timeout_sec": 30 00:20:19.796 } 00:20:19.796 }, 00:20:19.796 { 00:20:19.796 "method": "bdev_nvme_set_options", 00:20:19.796 "params": { 00:20:19.796 "action_on_timeout": "none", 00:20:19.796 "timeout_us": 0, 00:20:19.796 "timeout_admin_us": 0, 00:20:19.796 "keep_alive_timeout_ms": 10000, 00:20:19.796 "arbitration_burst": 0, 00:20:19.796 "low_priority_weight": 0, 00:20:19.796 "medium_priority_weight": 0, 00:20:19.796 "high_priority_weight": 0, 00:20:19.796 "nvme_adminq_poll_period_us": 10000, 00:20:19.796 "nvme_ioq_poll_period_us": 0, 00:20:19.796 "io_queue_requests": 0, 00:20:19.796 "delay_cmd_submit": true, 00:20:19.796 "transport_retry_count": 4, 00:20:19.796 "bdev_retry_count": 3, 00:20:19.796 "transport_ack_timeout": 0, 00:20:19.796 "ctrlr_loss_timeout_sec": 0, 00:20:19.796 "reconnect_delay_sec": 0, 00:20:19.796 "fast_io_fail_timeout_sec": 0, 00:20:19.796 "disable_auto_failback": false, 00:20:19.796 "generate_uuids": false, 00:20:19.796 "transport_tos": 0, 00:20:19.796 "nvme_error_stat": false, 00:20:19.796 "rdma_srq_size": 0, 00:20:19.796 "io_path_stat": false, 00:20:19.796 "allow_accel_sequence": false, 00:20:19.796 "rdma_max_cq_size": 0, 00:20:19.796 "rdma_cm_event_timeout_ms": 0, 00:20:19.796 "dhchap_digests": [ 00:20:19.796 "sha256", 00:20:19.796 "sha384", 00:20:19.796 "sha512" 00:20:19.796 ], 00:20:19.796 "dhchap_dhgroups": [ 00:20:19.796 "null", 00:20:19.796 "ffdhe2048", 00:20:19.796 "ffdhe3072", 00:20:19.796 "ffdhe4096", 00:20:19.796 "ffdhe6144", 00:20:19.796 "ffdhe8192" 00:20:19.796 ] 00:20:19.796 } 00:20:19.796 }, 00:20:19.796 { 00:20:19.796 "method": "bdev_nvme_set_hotplug", 00:20:19.796 "params": { 00:20:19.796 "period_us": 100000, 00:20:19.796 "enable": false 00:20:19.796 } 00:20:19.796 }, 00:20:19.796 
{ 00:20:19.796 "method": "bdev_malloc_create", 00:20:19.796 "params": { 00:20:19.796 "name": "malloc0", 00:20:19.796 "num_blocks": 8192, 00:20:19.796 "block_size": 4096, 00:20:19.796 "physical_block_size": 4096, 00:20:19.796 "uuid": "b15d9ba8-b30f-410e-ac7f-b6ace987aab6", 00:20:19.796 "optimal_io_boundary": 0 00:20:19.796 } 00:20:19.796 }, 00:20:19.796 { 00:20:19.796 "method": "bdev_wait_for_examine" 00:20:19.796 } 00:20:19.796 ] 00:20:19.796 }, 00:20:19.796 { 00:20:19.796 "subsystem": "nbd", 00:20:19.796 "config": [] 00:20:19.796 }, 00:20:19.796 { 00:20:19.796 "subsystem": "scheduler", 00:20:19.796 "config": [ 00:20:19.796 { 00:20:19.796 "method": "framework_set_scheduler", 00:20:19.796 "params": { 00:20:19.796 "name": "static" 00:20:19.796 } 00:20:19.796 } 00:20:19.796 ] 00:20:19.796 }, 00:20:19.796 { 00:20:19.796 "subsystem": "nvmf", 00:20:19.796 "config": [ 00:20:19.796 { 00:20:19.796 "method": "nvmf_set_config", 00:20:19.796 "params": { 00:20:19.796 "discovery_filter": "match_any", 00:20:19.796 "admin_cmd_passthru": { 00:20:19.797 "identify_ctrlr": false 00:20:19.797 } 00:20:19.797 } 00:20:19.797 }, 00:20:19.797 { 00:20:19.797 "method": "nvmf_set_max_subsystems", 00:20:19.797 "params": { 00:20:19.797 "max_subsystems": 1024 00:20:19.797 } 00:20:19.797 }, 00:20:19.797 { 00:20:19.797 "method": "nvmf_set_crdt", 00:20:19.797 "params": { 00:20:19.797 "crdt1": 0, 00:20:19.797 "crdt2": 0, 00:20:19.797 "crdt3": 0 00:20:19.797 } 00:20:19.797 }, 00:20:19.797 { 00:20:19.797 "method": "nvmf_create_transport", 00:20:19.797 "params": { 00:20:19.797 "trtype": "TCP", 00:20:19.797 "max_queue_depth": 128, 00:20:19.797 "max_io_qpairs_per_ctrlr": 127, 00:20:19.797 "in_capsule_data_size": 4096, 00:20:19.797 "max_io_size": 131072, 00:20:19.797 "io_unit_size": 131072, 00:20:19.797 "max_aq_depth": 128, 00:20:19.797 "num_shared_buffers": 511, 00:20:19.797 "buf_cache_size": 4294967295, 00:20:19.797 "dif_insert_or_strip": false, 00:20:19.797 "zcopy": false, 00:20:19.797 "c2h_success": false, 00:20:19.797 "sock_priority": 0, 00:20:19.797 "abort_timeout_sec": 1, 00:20:19.797 "ack_timeout": 0, 00:20:19.797 "data_wr_pool_size": 0 00:20:19.797 } 00:20:19.797 }, 00:20:19.797 { 00:20:19.797 "method": "nvmf_create_subsystem", 00:20:19.797 "params": { 00:20:19.797 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.797 "allow_any_host": false, 00:20:19.797 "serial_number": "SPDK00000000000001", 00:20:19.797 "model_number": "SPDK bdev Controller", 00:20:19.797 "max_namespaces": 10, 00:20:19.797 "min_cntlid": 1, 00:20:19.797 "max_cntlid": 65519, 00:20:19.797 "ana_reporting": false 00:20:19.797 } 00:20:19.797 }, 00:20:19.797 { 00:20:19.797 "method": "nvmf_subsystem_add_host", 00:20:19.797 "params": { 00:20:19.797 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.797 "host": "nqn.2016-06.io.spdk:host1", 00:20:19.797 "psk": "/tmp/tmp.urbfOxNyDE" 00:20:19.797 } 00:20:19.797 }, 00:20:19.797 { 00:20:19.797 "method": "nvmf_subsystem_add_ns", 00:20:19.797 "params": { 00:20:19.797 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.797 "namespace": { 00:20:19.797 "nsid": 1, 00:20:19.797 "bdev_name": "malloc0", 00:20:19.797 "nguid": "B15D9BA8B30F410EAC7FB6ACE987AAB6", 00:20:19.797 "uuid": "b15d9ba8-b30f-410e-ac7f-b6ace987aab6", 00:20:19.797 "no_auto_visible": false 00:20:19.797 } 00:20:19.797 } 00:20:19.797 }, 00:20:19.797 { 00:20:19.797 "method": "nvmf_subsystem_add_listener", 00:20:19.797 "params": { 00:20:19.797 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.797 "listen_address": { 00:20:19.797 "trtype": "TCP", 00:20:19.797 "adrfam": "IPv4", 
00:20:19.797 "traddr": "10.0.0.2", 00:20:19.797 "trsvcid": "4420" 00:20:19.797 }, 00:20:19.797 "secure_channel": true 00:20:19.797 } 00:20:19.797 } 00:20:19.797 ] 00:20:19.797 } 00:20:19.797 ] 00:20:19.797 }' 00:20:19.797 03:01:10 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:20.055 03:01:10 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:20:20.055 "subsystems": [ 00:20:20.055 { 00:20:20.055 "subsystem": "keyring", 00:20:20.055 "config": [] 00:20:20.055 }, 00:20:20.055 { 00:20:20.055 "subsystem": "iobuf", 00:20:20.055 "config": [ 00:20:20.055 { 00:20:20.055 "method": "iobuf_set_options", 00:20:20.055 "params": { 00:20:20.055 "small_pool_count": 8192, 00:20:20.055 "large_pool_count": 1024, 00:20:20.055 "small_bufsize": 8192, 00:20:20.055 "large_bufsize": 135168 00:20:20.055 } 00:20:20.055 } 00:20:20.055 ] 00:20:20.055 }, 00:20:20.055 { 00:20:20.055 "subsystem": "sock", 00:20:20.055 "config": [ 00:20:20.055 { 00:20:20.055 "method": "sock_impl_set_options", 00:20:20.055 "params": { 00:20:20.055 "impl_name": "posix", 00:20:20.055 "recv_buf_size": 2097152, 00:20:20.055 "send_buf_size": 2097152, 00:20:20.055 "enable_recv_pipe": true, 00:20:20.055 "enable_quickack": false, 00:20:20.055 "enable_placement_id": 0, 00:20:20.055 "enable_zerocopy_send_server": true, 00:20:20.055 "enable_zerocopy_send_client": false, 00:20:20.055 "zerocopy_threshold": 0, 00:20:20.055 "tls_version": 0, 00:20:20.055 "enable_ktls": false 00:20:20.055 } 00:20:20.055 }, 00:20:20.055 { 00:20:20.055 "method": "sock_impl_set_options", 00:20:20.055 "params": { 00:20:20.055 "impl_name": "ssl", 00:20:20.055 "recv_buf_size": 4096, 00:20:20.055 "send_buf_size": 4096, 00:20:20.055 "enable_recv_pipe": true, 00:20:20.055 "enable_quickack": false, 00:20:20.055 "enable_placement_id": 0, 00:20:20.055 "enable_zerocopy_send_server": true, 00:20:20.055 "enable_zerocopy_send_client": false, 00:20:20.055 "zerocopy_threshold": 0, 00:20:20.055 "tls_version": 0, 00:20:20.055 "enable_ktls": false 00:20:20.055 } 00:20:20.055 } 00:20:20.055 ] 00:20:20.055 }, 00:20:20.055 { 00:20:20.055 "subsystem": "vmd", 00:20:20.055 "config": [] 00:20:20.055 }, 00:20:20.055 { 00:20:20.055 "subsystem": "accel", 00:20:20.055 "config": [ 00:20:20.055 { 00:20:20.055 "method": "accel_set_options", 00:20:20.056 "params": { 00:20:20.056 "small_cache_size": 128, 00:20:20.056 "large_cache_size": 16, 00:20:20.056 "task_count": 2048, 00:20:20.056 "sequence_count": 2048, 00:20:20.056 "buf_count": 2048 00:20:20.056 } 00:20:20.056 } 00:20:20.056 ] 00:20:20.056 }, 00:20:20.056 { 00:20:20.056 "subsystem": "bdev", 00:20:20.056 "config": [ 00:20:20.056 { 00:20:20.056 "method": "bdev_set_options", 00:20:20.056 "params": { 00:20:20.056 "bdev_io_pool_size": 65535, 00:20:20.056 "bdev_io_cache_size": 256, 00:20:20.056 "bdev_auto_examine": true, 00:20:20.056 "iobuf_small_cache_size": 128, 00:20:20.056 "iobuf_large_cache_size": 16 00:20:20.056 } 00:20:20.056 }, 00:20:20.056 { 00:20:20.056 "method": "bdev_raid_set_options", 00:20:20.056 "params": { 00:20:20.056 "process_window_size_kb": 1024 00:20:20.056 } 00:20:20.056 }, 00:20:20.056 { 00:20:20.056 "method": "bdev_iscsi_set_options", 00:20:20.056 "params": { 00:20:20.056 "timeout_sec": 30 00:20:20.056 } 00:20:20.056 }, 00:20:20.056 { 00:20:20.056 "method": "bdev_nvme_set_options", 00:20:20.056 "params": { 00:20:20.056 "action_on_timeout": "none", 00:20:20.056 "timeout_us": 0, 00:20:20.056 "timeout_admin_us": 0, 
00:20:20.056 "keep_alive_timeout_ms": 10000, 00:20:20.056 "arbitration_burst": 0, 00:20:20.056 "low_priority_weight": 0, 00:20:20.056 "medium_priority_weight": 0, 00:20:20.056 "high_priority_weight": 0, 00:20:20.056 "nvme_adminq_poll_period_us": 10000, 00:20:20.056 "nvme_ioq_poll_period_us": 0, 00:20:20.056 "io_queue_requests": 512, 00:20:20.056 "delay_cmd_submit": true, 00:20:20.056 "transport_retry_count": 4, 00:20:20.056 "bdev_retry_count": 3, 00:20:20.056 "transport_ack_timeout": 0, 00:20:20.056 "ctrlr_loss_timeout_sec": 0, 00:20:20.056 "reconnect_delay_sec": 0, 00:20:20.056 "fast_io_fail_timeout_sec": 0, 00:20:20.056 "disable_auto_failback": false, 00:20:20.056 "generate_uuids": false, 00:20:20.056 "transport_tos": 0, 00:20:20.056 "nvme_error_stat": false, 00:20:20.056 "rdma_srq_size": 0, 00:20:20.056 "io_path_stat": false, 00:20:20.056 "allow_accel_sequence": false, 00:20:20.056 "rdma_max_cq_size": 0, 00:20:20.056 "rdma_cm_event_timeout_ms": 0, 00:20:20.056 "dhchap_digests": [ 00:20:20.056 "sha256", 00:20:20.056 "sha384", 00:20:20.056 "sha512" 00:20:20.056 ], 00:20:20.056 "dhchap_dhgroups": [ 00:20:20.056 "null", 00:20:20.056 "ffdhe2048", 00:20:20.056 "ffdhe3072", 00:20:20.056 "ffdhe4096", 00:20:20.056 "ffdhe6144", 00:20:20.056 "ffdhe8192" 00:20:20.056 ] 00:20:20.056 } 00:20:20.056 }, 00:20:20.056 { 00:20:20.056 "method": "bdev_nvme_attach_controller", 00:20:20.056 "params": { 00:20:20.056 "name": "TLSTEST", 00:20:20.056 "trtype": "TCP", 00:20:20.056 "adrfam": "IPv4", 00:20:20.056 "traddr": "10.0.0.2", 00:20:20.056 "trsvcid": "4420", 00:20:20.056 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.056 "prchk_reftag": false, 00:20:20.056 "prchk_guard": false, 00:20:20.056 "ctrlr_loss_timeout_sec": 0, 00:20:20.056 "reconnect_delay_sec": 0, 00:20:20.056 "fast_io_fail_timeout_sec": 0, 00:20:20.056 "psk": "/tmp/tmp.urbfOxNyDE", 00:20:20.056 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:20.056 "hdgst": false, 00:20:20.056 "ddgst": false 00:20:20.056 } 00:20:20.056 }, 00:20:20.056 { 00:20:20.056 "method": "bdev_nvme_set_hotplug", 00:20:20.056 "params": { 00:20:20.056 "period_us": 100000, 00:20:20.056 "enable": false 00:20:20.056 } 00:20:20.056 }, 00:20:20.056 { 00:20:20.056 "method": "bdev_wait_for_examine" 00:20:20.056 } 00:20:20.056 ] 00:20:20.056 }, 00:20:20.056 { 00:20:20.056 "subsystem": "nbd", 00:20:20.056 "config": [] 00:20:20.056 } 00:20:20.056 ] 00:20:20.056 }' 00:20:20.056 03:01:10 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 371759 00:20:20.056 03:01:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 371759 ']' 00:20:20.056 03:01:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 371759 00:20:20.056 03:01:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:20.056 03:01:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:20.056 03:01:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 371759 00:20:20.056 03:01:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:20.056 03:01:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:20.056 03:01:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 371759' 00:20:20.056 killing process with pid 371759 00:20:20.056 03:01:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 371759 00:20:20.056 Received shutdown signal, test time was about 10.000000 seconds 00:20:20.056 00:20:20.056 Latency(us) 
00:20:20.056 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.056 =================================================================================================================== 00:20:20.056 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:20.056 [2024-05-13 03:01:10.817362] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:20.056 03:01:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 371759 00:20:20.315 03:01:11 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 371542 00:20:20.315 03:01:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 371542 ']' 00:20:20.315 03:01:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 371542 00:20:20.315 03:01:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:20.315 03:01:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:20.315 03:01:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 371542 00:20:20.315 03:01:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:20.315 03:01:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:20.315 03:01:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 371542' 00:20:20.315 killing process with pid 371542 00:20:20.315 03:01:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 371542 00:20:20.315 [2024-05-13 03:01:11.071714] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:20.315 [2024-05-13 03:01:11.071766] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:20.315 03:01:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 371542 00:20:20.574 03:01:11 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:20.574 03:01:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:20.574 03:01:11 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:20:20.574 "subsystems": [ 00:20:20.574 { 00:20:20.574 "subsystem": "keyring", 00:20:20.574 "config": [] 00:20:20.574 }, 00:20:20.574 { 00:20:20.574 "subsystem": "iobuf", 00:20:20.574 "config": [ 00:20:20.574 { 00:20:20.574 "method": "iobuf_set_options", 00:20:20.574 "params": { 00:20:20.574 "small_pool_count": 8192, 00:20:20.574 "large_pool_count": 1024, 00:20:20.574 "small_bufsize": 8192, 00:20:20.574 "large_bufsize": 135168 00:20:20.574 } 00:20:20.574 } 00:20:20.574 ] 00:20:20.574 }, 00:20:20.574 { 00:20:20.574 "subsystem": "sock", 00:20:20.574 "config": [ 00:20:20.574 { 00:20:20.574 "method": "sock_impl_set_options", 00:20:20.574 "params": { 00:20:20.574 "impl_name": "posix", 00:20:20.574 "recv_buf_size": 2097152, 00:20:20.574 "send_buf_size": 2097152, 00:20:20.574 "enable_recv_pipe": true, 00:20:20.574 "enable_quickack": false, 00:20:20.574 "enable_placement_id": 0, 00:20:20.574 "enable_zerocopy_send_server": true, 00:20:20.574 "enable_zerocopy_send_client": false, 00:20:20.574 "zerocopy_threshold": 0, 00:20:20.574 "tls_version": 0, 00:20:20.574 "enable_ktls": false 00:20:20.574 } 00:20:20.574 }, 00:20:20.574 { 00:20:20.574 "method": "sock_impl_set_options", 00:20:20.574 
"params": { 00:20:20.574 "impl_name": "ssl", 00:20:20.574 "recv_buf_size": 4096, 00:20:20.574 "send_buf_size": 4096, 00:20:20.574 "enable_recv_pipe": true, 00:20:20.574 "enable_quickack": false, 00:20:20.574 "enable_placement_id": 0, 00:20:20.574 "enable_zerocopy_send_server": true, 00:20:20.574 "enable_zerocopy_send_client": false, 00:20:20.574 "zerocopy_threshold": 0, 00:20:20.574 "tls_version": 0, 00:20:20.574 "enable_ktls": false 00:20:20.574 } 00:20:20.574 } 00:20:20.574 ] 00:20:20.574 }, 00:20:20.574 { 00:20:20.574 "subsystem": "vmd", 00:20:20.574 "config": [] 00:20:20.574 }, 00:20:20.574 { 00:20:20.575 "subsystem": "accel", 00:20:20.575 "config": [ 00:20:20.575 { 00:20:20.575 "method": "accel_set_options", 00:20:20.575 "params": { 00:20:20.575 "small_cache_size": 128, 00:20:20.575 "large_cache_size": 16, 00:20:20.575 "task_count": 2048, 00:20:20.575 "sequence_count": 2048, 00:20:20.575 "buf_count": 2048 00:20:20.575 } 00:20:20.575 } 00:20:20.575 ] 00:20:20.575 }, 00:20:20.575 { 00:20:20.575 "subsystem": "bdev", 00:20:20.575 "config": [ 00:20:20.575 { 00:20:20.575 "method": "bdev_set_options", 00:20:20.575 "params": { 00:20:20.575 "bdev_io_pool_size": 65535, 00:20:20.575 "bdev_io_cache_size": 256, 00:20:20.575 "bdev_auto_examine": true, 00:20:20.575 "iobuf_small_cache_size": 128, 00:20:20.575 "iobuf_large_cache_size": 16 00:20:20.575 } 00:20:20.575 }, 00:20:20.575 { 00:20:20.575 "method": "bdev_raid_set_options", 00:20:20.575 "params": { 00:20:20.575 "process_window_size_kb": 1024 00:20:20.575 } 00:20:20.575 }, 00:20:20.575 { 00:20:20.575 "method": "bdev_iscsi_set_options", 00:20:20.575 "params": { 00:20:20.575 "timeout_sec": 30 00:20:20.575 } 00:20:20.575 }, 00:20:20.575 { 00:20:20.575 "method": "bdev_nvme_set_options", 00:20:20.575 "params": { 00:20:20.575 "action_on_timeout": "none", 00:20:20.575 "timeout_us": 0, 00:20:20.575 "timeout_admin_us": 0, 00:20:20.575 "keep_alive_timeout_ms": 10000, 00:20:20.575 "arbitration_burst": 0, 00:20:20.575 "low_priority_weight": 0, 00:20:20.575 "medium_priority_weight": 0, 00:20:20.575 "high_priority_weight": 0, 00:20:20.575 "nvme_adminq_poll_period_us": 10000, 00:20:20.575 "nvme_ioq_poll_period_us": 0, 00:20:20.575 "io_queue_requests": 0, 00:20:20.575 "delay_cmd_submit": true, 00:20:20.575 "transport_retry_count": 4, 00:20:20.575 "bdev_retry_count": 3, 00:20:20.575 "transport_ack_timeout": 0, 00:20:20.575 "ctrlr_loss_timeout_sec": 0, 00:20:20.575 "reconnect_delay_sec": 0, 00:20:20.575 "fast_io_fail_timeout_sec": 0, 00:20:20.575 "disable_auto_failback": false, 00:20:20.575 "generate_uuids": false, 00:20:20.575 "transport_tos": 0, 00:20:20.575 "nvme_error_stat": false, 00:20:20.575 "rdma_srq_size": 0, 00:20:20.575 "io_path_stat": false, 00:20:20.575 "allow_accel_sequence": false, 00:20:20.575 "rdma_max_cq_size": 0, 00:20:20.575 "rdma_cm_event_timeout_ms": 0, 00:20:20.575 "dhchap_digests": [ 00:20:20.575 "sha256", 00:20:20.575 "sha384", 00:20:20.575 "sha512" 00:20:20.575 ], 00:20:20.575 "dhchap_dhgroups": [ 00:20:20.575 "null", 00:20:20.575 "ffdhe2048", 00:20:20.575 "ffdhe3072", 00:20:20.575 "ffdhe4096", 00:20:20.575 "ffdhe6144", 00:20:20.575 "ffdhe8192" 00:20:20.575 ] 00:20:20.575 } 00:20:20.575 }, 00:20:20.575 { 00:20:20.575 "method": "bdev_nvme_set_hotplug", 00:20:20.575 "params": { 00:20:20.575 "period_us": 100000, 00:20:20.575 "enable": false 00:20:20.575 } 00:20:20.575 }, 00:20:20.575 { 00:20:20.575 "method": "bdev_malloc_create", 00:20:20.575 "params": { 00:20:20.575 "name": "malloc0", 00:20:20.575 "num_blocks": 8192, 00:20:20.575 
"block_size": 4096, 00:20:20.575 "physical_block_size": 4096, 00:20:20.575 "uuid": "b15d9ba8-b30f-410e-ac7f-b6ace987aab6", 00:20:20.575 "optimal_io_boundary": 0 00:20:20.575 } 00:20:20.575 }, 00:20:20.575 { 00:20:20.575 "method": "bdev_wait_for_examine" 00:20:20.575 } 00:20:20.575 ] 00:20:20.575 }, 00:20:20.575 { 00:20:20.575 "subsystem": "nbd", 00:20:20.575 "config": [] 00:20:20.575 }, 00:20:20.575 { 00:20:20.575 "subsystem": "scheduler", 00:20:20.575 "config": [ 00:20:20.575 { 00:20:20.575 "method": "framework_set_scheduler", 00:20:20.575 "params": { 00:20:20.575 "name": "static" 00:20:20.575 } 00:20:20.575 } 00:20:20.575 ] 00:20:20.575 }, 00:20:20.575 { 00:20:20.575 "subsystem": "nvmf", 00:20:20.575 "config": [ 00:20:20.575 { 00:20:20.575 "method": "nvmf_set_config", 00:20:20.575 "params": { 00:20:20.575 "discovery_filter": "match_any", 00:20:20.575 "admin_cmd_passthru": { 00:20:20.575 "identify_ctrlr": false 00:20:20.575 } 00:20:20.575 } 00:20:20.575 }, 00:20:20.575 { 00:20:20.575 "method": "nvmf_set_max_subsystems", 00:20:20.575 "params": { 00:20:20.575 "max_subsystems": 1024 00:20:20.575 } 00:20:20.575 }, 00:20:20.575 { 00:20:20.575 "method": "nvmf_set_crdt", 00:20:20.575 "params": { 00:20:20.575 "crdt1": 0, 00:20:20.575 "crdt2": 0, 00:20:20.575 "crdt3": 0 00:20:20.575 } 00:20:20.575 }, 00:20:20.575 { 00:20:20.575 "method": "nvmf_create_transport", 00:20:20.575 "params": { 00:20:20.575 "trtype": "TCP", 00:20:20.575 "max_queue_depth": 128, 00:20:20.575 "max_io_qpairs_per_ctrlr": 127, 00:20:20.575 "in_capsule_data_size": 4096, 00:20:20.575 "max_io_size": 131072, 00:20:20.575 "io_unit_size": 131072, 00:20:20.575 "max_aq_depth": 128, 00:20:20.575 "num_shared_buffers": 511, 00:20:20.575 "buf_cache_size": 4294967295, 00:20:20.575 "dif_insert_or_strip": false, 00:20:20.575 "zcopy": false, 00:20:20.575 "c2h_success": false, 00:20:20.575 "sock_priority": 0, 00:20:20.575 "abort_timeout_sec": 1, 00:20:20.575 "ack_timeout": 0, 00:20:20.575 "data_wr_pool_size": 0 00:20:20.575 } 00:20:20.575 }, 00:20:20.575 { 00:20:20.575 "method": "nvmf_create_subsystem", 00:20:20.575 "params": { 00:20:20.575 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.575 "allow_any_host": false, 00:20:20.575 "serial_number": "SPDK00000000000001", 00:20:20.575 "model_number": "SPDK bdev Controller", 00:20:20.575 "max_namespaces": 10, 00:20:20.575 "min_cntlid": 1, 00:20:20.575 "max_cntlid": 65519, 00:20:20.575 "ana_reporting": false 00:20:20.575 } 00:20:20.575 }, 00:20:20.575 { 00:20:20.575 "method": "nvmf_subsystem_add_host", 00:20:20.575 "params": { 00:20:20.575 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.575 "host": "nqn.2016-06.io.spdk:host1", 00:20:20.575 "psk": "/tmp/tmp.urbfOxNyDE" 00:20:20.575 } 00:20:20.575 }, 00:20:20.575 { 00:20:20.575 "method": "nvmf_subsystem_add_ns", 00:20:20.575 "params": { 00:20:20.575 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.575 "namespace": { 00:20:20.575 "nsid": 1, 00:20:20.575 "bdev_name": "malloc0", 00:20:20.575 "nguid": "B15D9BA8B30F410EAC7FB6ACE987AAB6", 00:20:20.575 "uuid": "b15d9ba8-b30f-410e-ac7f-b6ace987aab6", 00:20:20.575 "no_auto_visible": false 00:20:20.575 } 00:20:20.575 } 00:20:20.575 }, 00:20:20.575 { 00:20:20.575 "method": "nvmf_subsystem_add_listener", 00:20:20.575 "params": { 00:20:20.575 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.575 "listen_address": { 00:20:20.575 "trtype": "TCP", 00:20:20.575 "adrfam": "IPv4", 00:20:20.575 "traddr": "10.0.0.2", 00:20:20.575 "trsvcid": "4420" 00:20:20.575 }, 00:20:20.575 "secure_channel": true 00:20:20.575 } 00:20:20.575 } 
00:20:20.575 ] 00:20:20.575 } 00:20:20.575 ] 00:20:20.575 }' 00:20:20.575 03:01:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:20.575 03:01:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.575 03:01:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=372035 00:20:20.575 03:01:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:20.575 03:01:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 372035 00:20:20.575 03:01:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 372035 ']' 00:20:20.575 03:01:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.576 03:01:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:20.576 03:01:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:20.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:20.576 03:01:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:20.576 03:01:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.576 [2024-05-13 03:01:11.368354] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:20:20.576 [2024-05-13 03:01:11.368450] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:20.837 EAL: No free 2048 kB hugepages reported on node 1 00:20:20.837 [2024-05-13 03:01:11.405494] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:20.837 [2024-05-13 03:01:11.437351] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.837 [2024-05-13 03:01:11.524244] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:20.837 [2024-05-13 03:01:11.524308] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:20.837 [2024-05-13 03:01:11.524324] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:20.837 [2024-05-13 03:01:11.524338] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:20.837 [2024-05-13 03:01:11.524349] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:20.837 [2024-05-13 03:01:11.524435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:21.124 [2024-05-13 03:01:11.755542] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:21.124 [2024-05-13 03:01:11.771479] tcp.c:3657:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:21.124 [2024-05-13 03:01:11.787498] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:21.124 [2024-05-13 03:01:11.787575] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:21.124 [2024-05-13 03:01:11.796907] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:21.689 03:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:21.689 03:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:21.689 03:01:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:21.689 03:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:21.689 03:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.689 03:01:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:21.689 03:01:12 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=372189 00:20:21.689 03:01:12 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 372189 /var/tmp/bdevperf.sock 00:20:21.689 03:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 372189 ']' 00:20:21.689 03:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:21.689 03:01:12 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:21.689 03:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:21.689 03:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:21.689 03:01:12 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:20:21.689 "subsystems": [ 00:20:21.689 { 00:20:21.689 "subsystem": "keyring", 00:20:21.689 "config": [] 00:20:21.689 }, 00:20:21.689 { 00:20:21.689 "subsystem": "iobuf", 00:20:21.689 "config": [ 00:20:21.689 { 00:20:21.689 "method": "iobuf_set_options", 00:20:21.689 "params": { 00:20:21.689 "small_pool_count": 8192, 00:20:21.689 "large_pool_count": 1024, 00:20:21.689 "small_bufsize": 8192, 00:20:21.689 "large_bufsize": 135168 00:20:21.689 } 00:20:21.689 } 00:20:21.689 ] 00:20:21.689 }, 00:20:21.689 { 00:20:21.689 "subsystem": "sock", 00:20:21.689 "config": [ 00:20:21.689 { 00:20:21.689 "method": "sock_impl_set_options", 00:20:21.689 "params": { 00:20:21.689 "impl_name": "posix", 00:20:21.689 "recv_buf_size": 2097152, 00:20:21.689 "send_buf_size": 2097152, 00:20:21.689 "enable_recv_pipe": true, 00:20:21.689 "enable_quickack": false, 00:20:21.689 "enable_placement_id": 0, 00:20:21.689 "enable_zerocopy_send_server": true, 00:20:21.689 "enable_zerocopy_send_client": false, 00:20:21.689 "zerocopy_threshold": 0, 00:20:21.689 "tls_version": 0, 00:20:21.689 "enable_ktls": false 00:20:21.689 } 00:20:21.689 }, 00:20:21.689 { 00:20:21.689 "method": "sock_impl_set_options", 00:20:21.689 "params": { 00:20:21.689 "impl_name": "ssl", 00:20:21.689 "recv_buf_size": 4096, 00:20:21.689 "send_buf_size": 4096, 00:20:21.689 "enable_recv_pipe": true, 00:20:21.689 "enable_quickack": false, 00:20:21.689 "enable_placement_id": 0, 00:20:21.689 "enable_zerocopy_send_server": true, 00:20:21.689 "enable_zerocopy_send_client": false, 00:20:21.689 "zerocopy_threshold": 0, 00:20:21.689 "tls_version": 0, 00:20:21.689 "enable_ktls": false 00:20:21.689 } 00:20:21.689 } 00:20:21.689 ] 00:20:21.689 }, 00:20:21.689 { 00:20:21.689 "subsystem": "vmd", 00:20:21.689 "config": [] 00:20:21.689 }, 00:20:21.689 { 00:20:21.689 "subsystem": "accel", 00:20:21.690 "config": [ 00:20:21.690 { 00:20:21.690 "method": "accel_set_options", 00:20:21.690 "params": { 00:20:21.690 "small_cache_size": 128, 00:20:21.690 "large_cache_size": 16, 00:20:21.690 "task_count": 2048, 00:20:21.690 "sequence_count": 2048, 00:20:21.690 "buf_count": 2048 00:20:21.690 } 00:20:21.690 } 00:20:21.690 ] 00:20:21.690 }, 00:20:21.690 { 00:20:21.690 "subsystem": "bdev", 00:20:21.690 "config": [ 00:20:21.690 { 00:20:21.690 "method": "bdev_set_options", 00:20:21.690 "params": { 00:20:21.690 "bdev_io_pool_size": 65535, 00:20:21.690 "bdev_io_cache_size": 256, 00:20:21.690 "bdev_auto_examine": true, 00:20:21.690 "iobuf_small_cache_size": 128, 00:20:21.690 "iobuf_large_cache_size": 16 00:20:21.690 } 00:20:21.690 }, 00:20:21.690 { 00:20:21.690 "method": "bdev_raid_set_options", 00:20:21.690 "params": { 00:20:21.690 "process_window_size_kb": 1024 00:20:21.690 } 00:20:21.690 }, 00:20:21.690 { 00:20:21.690 "method": "bdev_iscsi_set_options", 00:20:21.690 "params": { 00:20:21.690 "timeout_sec": 30 00:20:21.690 } 00:20:21.690 }, 00:20:21.690 { 00:20:21.690 "method": "bdev_nvme_set_options", 00:20:21.690 "params": { 00:20:21.690 "action_on_timeout": "none", 00:20:21.690 "timeout_us": 0, 00:20:21.690 "timeout_admin_us": 0, 00:20:21.690 "keep_alive_timeout_ms": 10000, 00:20:21.690 "arbitration_burst": 0, 00:20:21.690 "low_priority_weight": 0, 00:20:21.690 "medium_priority_weight": 0, 00:20:21.690 "high_priority_weight": 0, 00:20:21.690 "nvme_adminq_poll_period_us": 10000, 00:20:21.690 "nvme_ioq_poll_period_us": 0, 00:20:21.690 "io_queue_requests": 512, 00:20:21.690 "delay_cmd_submit": true, 00:20:21.690 
"transport_retry_count": 4, 00:20:21.690 "bdev_retry_count": 3, 00:20:21.690 "transport_ack_timeout": 0, 00:20:21.690 "ctrlr_loss_timeout_sec": 0, 00:20:21.690 "reconnect_delay_sec": 0, 00:20:21.690 "fast_io_fail_timeout_sec": 0, 00:20:21.690 "disable_auto_failback": false, 00:20:21.690 "generate_uuids": false, 00:20:21.690 "transport_tos": 0, 00:20:21.690 "nvme_error_stat": false, 00:20:21.690 "rdma_srq_size": 0, 00:20:21.690 "io_path_stat": false, 00:20:21.690 "allow_accel_sequence": false, 00:20:21.690 "rdma_max_cq_size": 0, 00:20:21.690 "rdma_cm_event_timeout_ms": 0, 00:20:21.690 "dhchap_digests": [ 00:20:21.690 "sha256", 00:20:21.690 "sha384", 00:20:21.690 "sha512" 00:20:21.690 ], 00:20:21.690 "dhchap_dhgroups": [ 00:20:21.690 "null", 00:20:21.690 "ffdhe2048", 00:20:21.690 "ffdhe3072", 00:20:21.690 "ffdhe4096", 00:20:21.690 "ffdhe6144", 00:20:21.690 "ffdhe8192" 00:20:21.690 ] 00:20:21.690 } 00:20:21.690 }, 00:20:21.690 { 00:20:21.690 "method": "bdev_nvme_attach_controller", 00:20:21.690 "params": { 00:20:21.690 "name": "TLSTEST", 00:20:21.690 "trtype": "TCP", 00:20:21.690 "adrfam": "IPv4", 00:20:21.690 "traddr": "10.0.0.2", 00:20:21.690 "trsvcid": "4420", 00:20:21.690 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.690 "prchk_reftag": false, 00:20:21.690 "prchk_guard": false, 00:20:21.690 "ctrlr_loss_timeout_sec": 0, 00:20:21.690 "reconnect_delay_sec": 0, 00:20:21.690 "fast_io_fail_timeout_sec": 0, 00:20:21.690 "psk": "/tmp/tmp.urbfOxNyDE", 00:20:21.690 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:21.690 "hdgst": false, 00:20:21.690 "ddgst": false 00:20:21.690 } 00:20:21.690 }, 00:20:21.690 { 00:20:21.690 "method": "bdev_nvme_set_hotplug", 00:20:21.690 "params": { 00:20:21.690 "period_us": 100000, 00:20:21.690 "enable": false 00:20:21.690 } 00:20:21.690 }, 00:20:21.690 { 00:20:21.690 "method": "bdev_wait_for_examine" 00:20:21.690 } 00:20:21.690 ] 00:20:21.690 }, 00:20:21.690 { 00:20:21.690 "subsystem": "nbd", 00:20:21.690 "config": [] 00:20:21.690 } 00:20:21.690 ] 00:20:21.690 }' 00:20:21.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:21.690 03:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:21.690 03:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.690 [2024-05-13 03:01:12.416644] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:20:21.690 [2024-05-13 03:01:12.416742] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid372189 ] 00:20:21.690 EAL: No free 2048 kB hugepages reported on node 1 00:20:21.690 [2024-05-13 03:01:12.448346] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:20:21.690 [2024-05-13 03:01:12.475756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.948 [2024-05-13 03:01:12.562298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:21.948 [2024-05-13 03:01:12.725268] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:21.948 [2024-05-13 03:01:12.725408] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:22.929 03:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:22.929 03:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:22.929 03:01:13 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:22.929 Running I/O for 10 seconds... 00:20:32.893 00:20:32.893 Latency(us) 00:20:32.893 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.893 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:32.893 Verification LBA range: start 0x0 length 0x2000 00:20:32.893 TLSTESTn1 : 10.14 779.62 3.05 0.00 0.00 163298.72 6941.96 237677.23 00:20:32.893 =================================================================================================================== 00:20:32.893 Total : 779.62 3.05 0.00 0.00 163298.72 6941.96 237677.23 00:20:32.893 0 00:20:33.152 03:01:23 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:33.152 03:01:23 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 372189 00:20:33.152 03:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 372189 ']' 00:20:33.152 03:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 372189 00:20:33.152 03:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:33.152 03:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:33.152 03:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 372189 00:20:33.152 03:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:33.152 03:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:33.152 03:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 372189' 00:20:33.152 killing process with pid 372189 00:20:33.152 03:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 372189 00:20:33.152 Received shutdown signal, test time was about 10.000000 seconds 00:20:33.152 00:20:33.152 Latency(us) 00:20:33.152 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:33.152 =================================================================================================================== 00:20:33.152 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:33.152 [2024-05-13 03:01:23.735568] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:33.152 03:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 372189 00:20:33.409 03:01:23 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 372035 00:20:33.409 03:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 372035 ']' 00:20:33.409 03:01:23 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@950 -- # kill -0 372035 00:20:33.409 03:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:33.409 03:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:33.409 03:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 372035 00:20:33.409 03:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:33.409 03:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:33.409 03:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 372035' 00:20:33.409 killing process with pid 372035 00:20:33.409 03:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 372035 00:20:33.409 [2024-05-13 03:01:23.991496] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:33.409 [2024-05-13 03:01:23.991560] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:33.409 03:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 372035 00:20:33.667 03:01:24 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:20:33.668 03:01:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:33.668 03:01:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:33.668 03:01:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:33.668 03:01:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=373521 00:20:33.668 03:01:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:33.668 03:01:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 373521 00:20:33.668 03:01:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 373521 ']' 00:20:33.668 03:01:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:33.668 03:01:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:33.668 03:01:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:33.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:33.668 03:01:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:33.668 03:01:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:33.668 [2024-05-13 03:01:24.285490] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:20:33.668 [2024-05-13 03:01:24.285570] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:33.668 EAL: No free 2048 kB hugepages reported on node 1 00:20:33.668 [2024-05-13 03:01:24.322102] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:20:33.668 [2024-05-13 03:01:24.351627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.668 [2024-05-13 03:01:24.440437] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:33.668 [2024-05-13 03:01:24.440499] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:33.668 [2024-05-13 03:01:24.440515] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:33.668 [2024-05-13 03:01:24.440535] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:33.668 [2024-05-13 03:01:24.440547] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:33.668 [2024-05-13 03:01:24.440576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:33.925 03:01:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:33.925 03:01:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:33.925 03:01:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:33.926 03:01:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:33.926 03:01:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:33.926 03:01:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:33.926 03:01:24 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.urbfOxNyDE 00:20:33.926 03:01:24 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.urbfOxNyDE 00:20:33.926 03:01:24 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:34.183 [2024-05-13 03:01:24.811510] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:34.183 03:01:24 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:34.439 03:01:25 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:34.697 [2024-05-13 03:01:25.340901] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:34.697 [2024-05-13 03:01:25.340986] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:34.697 [2024-05-13 03:01:25.341197] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:34.697 03:01:25 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:34.954 malloc0 00:20:34.954 03:01:25 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:35.211 03:01:25 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.urbfOxNyDE 00:20:35.468 [2024-05-13 03:01:26.091336] tcp.c:3657:nvmf_tcp_subsystem_add_host: 
*WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:35.468 03:01:26 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=373802 00:20:35.468 03:01:26 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:35.468 03:01:26 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:35.468 03:01:26 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 373802 /var/tmp/bdevperf.sock 00:20:35.468 03:01:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 373802 ']' 00:20:35.468 03:01:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:35.468 03:01:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:35.468 03:01:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:35.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:35.469 03:01:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:35.469 03:01:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.469 [2024-05-13 03:01:26.142036] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:20:35.469 [2024-05-13 03:01:26.142118] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid373802 ] 00:20:35.469 EAL: No free 2048 kB hugepages reported on node 1 00:20:35.469 [2024-05-13 03:01:26.173239] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:35.469 [2024-05-13 03:01:26.200437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.726 [2024-05-13 03:01:26.287254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:35.726 03:01:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:35.726 03:01:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:35.726 03:01:26 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.urbfOxNyDE 00:20:35.983 03:01:26 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:36.240 [2024-05-13 03:01:26.859373] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:36.240 nvme0n1 00:20:36.240 03:01:26 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:36.508 Running I/O for 1 seconds... 
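For this pass the target was rebuilt step by step over RPC (setup_nvmf_tgt, tls.sh@51-58) rather than replayed from a saved config, and the initiator attached through the keyring. Collected in one place, the target-side sequence the trace just executed looks like the sketch below; commands are copied from the trace with the rpc.py path shortened, so treat it as a recap rather than a reference invocation.
# Target-side TLS setup as performed above (default RPC socket)
./scripts/rpc.py nvmf_create_transport -t tcp -o
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k matches the "secure_channel": true seen in the saved listener config
./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0   # 8192 blocks x 4096 B = 32 MiB namespace backing
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.urbfOxNyDE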
00:20:37.439 00:20:37.439 Latency(us) 00:20:37.439 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.439 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:37.439 Verification LBA range: start 0x0 length 0x2000 00:20:37.439 nvme0n1 : 1.12 911.33 3.56 0.00 0.00 134969.21 6650.69 182529.90 00:20:37.439 =================================================================================================================== 00:20:37.439 Total : 911.33 3.56 0.00 0.00 134969.21 6650.69 182529.90 00:20:37.439 0 00:20:37.439 03:01:28 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 373802 00:20:37.439 03:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 373802 ']' 00:20:37.439 03:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 373802 00:20:37.439 03:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:37.439 03:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:37.439 03:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 373802 00:20:37.439 03:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:37.439 03:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:37.439 03:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 373802' 00:20:37.439 killing process with pid 373802 00:20:37.439 03:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 373802 00:20:37.439 Received shutdown signal, test time was about 1.000000 seconds 00:20:37.439 00:20:37.439 Latency(us) 00:20:37.439 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.439 =================================================================================================================== 00:20:37.439 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:37.439 03:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 373802 00:20:37.697 03:01:28 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 373521 00:20:37.697 03:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 373521 ']' 00:20:37.697 03:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 373521 00:20:37.697 03:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:37.697 03:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:37.697 03:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 373521 00:20:37.697 03:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:37.697 03:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:37.697 03:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 373521' 00:20:37.697 killing process with pid 373521 00:20:37.697 03:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 373521 00:20:37.697 [2024-05-13 03:01:28.484893] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:37.698 [2024-05-13 03:01:28.484950] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:37.698 03:01:28 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@970 -- # wait 373521 00:20:37.956 03:01:28 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:20:37.956 03:01:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:37.956 03:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:37.956 03:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:37.956 03:01:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=374082 00:20:37.956 03:01:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:37.956 03:01:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 374082 00:20:37.956 03:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 374082 ']' 00:20:37.956 03:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.956 03:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:37.956 03:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:37.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:37.956 03:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:37.956 03:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.215 [2024-05-13 03:01:28.794390] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:20:38.215 [2024-05-13 03:01:28.794492] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:38.215 EAL: No free 2048 kB hugepages reported on node 1 00:20:38.215 [2024-05-13 03:01:28.831784] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:38.215 [2024-05-13 03:01:28.863837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.215 [2024-05-13 03:01:28.951484] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:38.215 [2024-05-13 03:01:28.951546] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:38.215 [2024-05-13 03:01:28.951562] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:38.215 [2024-05-13 03:01:28.951576] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:38.215 [2024-05-13 03:01:28.951589] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:38.215 [2024-05-13 03:01:28.951628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.473 03:01:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:38.473 03:01:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:38.473 03:01:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:38.473 03:01:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:38.473 03:01:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.473 03:01:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:38.473 03:01:29 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:20:38.473 03:01:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.473 03:01:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.473 [2024-05-13 03:01:29.100190] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:38.473 malloc0 00:20:38.473 [2024-05-13 03:01:29.131540] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:38.473 [2024-05-13 03:01:29.131755] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:38.473 [2024-05-13 03:01:29.131961] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:38.473 03:01:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.473 03:01:29 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=374172 00:20:38.473 03:01:29 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 374172 /var/tmp/bdevperf.sock 00:20:38.473 03:01:29 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:38.473 03:01:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 374172 ']' 00:20:38.473 03:01:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:38.473 03:01:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:38.473 03:01:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:38.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:38.473 03:01:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:38.473 03:01:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.473 [2024-05-13 03:01:29.205581] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:20:38.473 [2024-05-13 03:01:29.205669] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid374172 ] 00:20:38.473 EAL: No free 2048 kB hugepages reported on node 1 00:20:38.473 [2024-05-13 03:01:29.239913] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:20:38.473 [2024-05-13 03:01:29.268261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.731 [2024-05-13 03:01:29.357454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:38.731 03:01:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:38.731 03:01:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:38.731 03:01:29 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.urbfOxNyDE 00:20:38.990 03:01:29 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:39.248 [2024-05-13 03:01:29.980892] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:39.506 nvme0n1 00:20:39.506 03:01:30 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:39.506 Running I/O for 1 seconds... 00:20:40.880 00:20:40.880 Latency(us) 00:20:40.880 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:40.880 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:40.880 Verification LBA range: start 0x0 length 0x2000 00:20:40.880 nvme0n1 : 1.10 992.40 3.88 0.00 0.00 124263.48 6505.05 183306.62 00:20:40.880 =================================================================================================================== 00:20:40.880 Total : 992.40 3.88 0.00 0.00 124263.48 6505.05 183306.62 00:20:40.880 0 00:20:40.880 03:01:31 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:20:40.880 03:01:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.880 03:01:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:40.880 03:01:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.880 03:01:31 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:20:40.880 "subsystems": [ 00:20:40.880 { 00:20:40.880 "subsystem": "keyring", 00:20:40.880 "config": [ 00:20:40.880 { 00:20:40.880 "method": "keyring_file_add_key", 00:20:40.880 "params": { 00:20:40.880 "name": "key0", 00:20:40.880 "path": "/tmp/tmp.urbfOxNyDE" 00:20:40.880 } 00:20:40.880 } 00:20:40.880 ] 00:20:40.880 }, 00:20:40.880 { 00:20:40.880 "subsystem": "iobuf", 00:20:40.880 "config": [ 00:20:40.880 { 00:20:40.880 "method": "iobuf_set_options", 00:20:40.880 "params": { 00:20:40.880 "small_pool_count": 8192, 00:20:40.880 "large_pool_count": 1024, 00:20:40.880 "small_bufsize": 8192, 00:20:40.880 "large_bufsize": 135168 00:20:40.880 } 00:20:40.880 } 00:20:40.880 ] 00:20:40.880 }, 00:20:40.880 { 00:20:40.880 "subsystem": "sock", 00:20:40.880 "config": [ 00:20:40.880 { 00:20:40.880 "method": "sock_impl_set_options", 00:20:40.880 "params": { 00:20:40.880 "impl_name": "posix", 00:20:40.880 "recv_buf_size": 2097152, 00:20:40.880 "send_buf_size": 2097152, 00:20:40.880 "enable_recv_pipe": true, 00:20:40.880 "enable_quickack": false, 00:20:40.880 "enable_placement_id": 0, 00:20:40.880 "enable_zerocopy_send_server": true, 00:20:40.880 "enable_zerocopy_send_client": false, 00:20:40.880 "zerocopy_threshold": 0, 00:20:40.880 "tls_version": 0, 00:20:40.880 
"enable_ktls": false 00:20:40.880 } 00:20:40.880 }, 00:20:40.880 { 00:20:40.880 "method": "sock_impl_set_options", 00:20:40.880 "params": { 00:20:40.880 "impl_name": "ssl", 00:20:40.880 "recv_buf_size": 4096, 00:20:40.880 "send_buf_size": 4096, 00:20:40.880 "enable_recv_pipe": true, 00:20:40.880 "enable_quickack": false, 00:20:40.880 "enable_placement_id": 0, 00:20:40.880 "enable_zerocopy_send_server": true, 00:20:40.880 "enable_zerocopy_send_client": false, 00:20:40.880 "zerocopy_threshold": 0, 00:20:40.880 "tls_version": 0, 00:20:40.880 "enable_ktls": false 00:20:40.880 } 00:20:40.880 } 00:20:40.880 ] 00:20:40.880 }, 00:20:40.880 { 00:20:40.880 "subsystem": "vmd", 00:20:40.880 "config": [] 00:20:40.880 }, 00:20:40.880 { 00:20:40.880 "subsystem": "accel", 00:20:40.880 "config": [ 00:20:40.880 { 00:20:40.880 "method": "accel_set_options", 00:20:40.880 "params": { 00:20:40.880 "small_cache_size": 128, 00:20:40.880 "large_cache_size": 16, 00:20:40.880 "task_count": 2048, 00:20:40.880 "sequence_count": 2048, 00:20:40.880 "buf_count": 2048 00:20:40.880 } 00:20:40.880 } 00:20:40.880 ] 00:20:40.880 }, 00:20:40.880 { 00:20:40.880 "subsystem": "bdev", 00:20:40.880 "config": [ 00:20:40.880 { 00:20:40.880 "method": "bdev_set_options", 00:20:40.880 "params": { 00:20:40.880 "bdev_io_pool_size": 65535, 00:20:40.880 "bdev_io_cache_size": 256, 00:20:40.880 "bdev_auto_examine": true, 00:20:40.880 "iobuf_small_cache_size": 128, 00:20:40.880 "iobuf_large_cache_size": 16 00:20:40.880 } 00:20:40.880 }, 00:20:40.880 { 00:20:40.880 "method": "bdev_raid_set_options", 00:20:40.880 "params": { 00:20:40.880 "process_window_size_kb": 1024 00:20:40.880 } 00:20:40.880 }, 00:20:40.880 { 00:20:40.880 "method": "bdev_iscsi_set_options", 00:20:40.880 "params": { 00:20:40.880 "timeout_sec": 30 00:20:40.880 } 00:20:40.880 }, 00:20:40.880 { 00:20:40.880 "method": "bdev_nvme_set_options", 00:20:40.880 "params": { 00:20:40.880 "action_on_timeout": "none", 00:20:40.880 "timeout_us": 0, 00:20:40.880 "timeout_admin_us": 0, 00:20:40.880 "keep_alive_timeout_ms": 10000, 00:20:40.880 "arbitration_burst": 0, 00:20:40.880 "low_priority_weight": 0, 00:20:40.880 "medium_priority_weight": 0, 00:20:40.880 "high_priority_weight": 0, 00:20:40.880 "nvme_adminq_poll_period_us": 10000, 00:20:40.880 "nvme_ioq_poll_period_us": 0, 00:20:40.880 "io_queue_requests": 0, 00:20:40.881 "delay_cmd_submit": true, 00:20:40.881 "transport_retry_count": 4, 00:20:40.881 "bdev_retry_count": 3, 00:20:40.881 "transport_ack_timeout": 0, 00:20:40.881 "ctrlr_loss_timeout_sec": 0, 00:20:40.881 "reconnect_delay_sec": 0, 00:20:40.881 "fast_io_fail_timeout_sec": 0, 00:20:40.881 "disable_auto_failback": false, 00:20:40.881 "generate_uuids": false, 00:20:40.881 "transport_tos": 0, 00:20:40.881 "nvme_error_stat": false, 00:20:40.881 "rdma_srq_size": 0, 00:20:40.881 "io_path_stat": false, 00:20:40.881 "allow_accel_sequence": false, 00:20:40.881 "rdma_max_cq_size": 0, 00:20:40.881 "rdma_cm_event_timeout_ms": 0, 00:20:40.881 "dhchap_digests": [ 00:20:40.881 "sha256", 00:20:40.881 "sha384", 00:20:40.881 "sha512" 00:20:40.881 ], 00:20:40.881 "dhchap_dhgroups": [ 00:20:40.881 "null", 00:20:40.881 "ffdhe2048", 00:20:40.881 "ffdhe3072", 00:20:40.881 "ffdhe4096", 00:20:40.881 "ffdhe6144", 00:20:40.881 "ffdhe8192" 00:20:40.881 ] 00:20:40.881 } 00:20:40.881 }, 00:20:40.881 { 00:20:40.881 "method": "bdev_nvme_set_hotplug", 00:20:40.881 "params": { 00:20:40.881 "period_us": 100000, 00:20:40.881 "enable": false 00:20:40.881 } 00:20:40.881 }, 00:20:40.881 { 00:20:40.881 "method": 
"bdev_malloc_create", 00:20:40.881 "params": { 00:20:40.881 "name": "malloc0", 00:20:40.881 "num_blocks": 8192, 00:20:40.881 "block_size": 4096, 00:20:40.881 "physical_block_size": 4096, 00:20:40.881 "uuid": "0f2ad693-f5b9-46a0-9668-b15526c6fa92", 00:20:40.881 "optimal_io_boundary": 0 00:20:40.881 } 00:20:40.881 }, 00:20:40.881 { 00:20:40.881 "method": "bdev_wait_for_examine" 00:20:40.881 } 00:20:40.881 ] 00:20:40.881 }, 00:20:40.881 { 00:20:40.881 "subsystem": "nbd", 00:20:40.881 "config": [] 00:20:40.881 }, 00:20:40.881 { 00:20:40.881 "subsystem": "scheduler", 00:20:40.881 "config": [ 00:20:40.881 { 00:20:40.881 "method": "framework_set_scheduler", 00:20:40.881 "params": { 00:20:40.881 "name": "static" 00:20:40.881 } 00:20:40.881 } 00:20:40.881 ] 00:20:40.881 }, 00:20:40.881 { 00:20:40.881 "subsystem": "nvmf", 00:20:40.881 "config": [ 00:20:40.881 { 00:20:40.881 "method": "nvmf_set_config", 00:20:40.881 "params": { 00:20:40.881 "discovery_filter": "match_any", 00:20:40.881 "admin_cmd_passthru": { 00:20:40.881 "identify_ctrlr": false 00:20:40.881 } 00:20:40.881 } 00:20:40.881 }, 00:20:40.881 { 00:20:40.881 "method": "nvmf_set_max_subsystems", 00:20:40.881 "params": { 00:20:40.881 "max_subsystems": 1024 00:20:40.881 } 00:20:40.881 }, 00:20:40.881 { 00:20:40.881 "method": "nvmf_set_crdt", 00:20:40.881 "params": { 00:20:40.881 "crdt1": 0, 00:20:40.881 "crdt2": 0, 00:20:40.881 "crdt3": 0 00:20:40.881 } 00:20:40.881 }, 00:20:40.881 { 00:20:40.881 "method": "nvmf_create_transport", 00:20:40.881 "params": { 00:20:40.881 "trtype": "TCP", 00:20:40.881 "max_queue_depth": 128, 00:20:40.881 "max_io_qpairs_per_ctrlr": 127, 00:20:40.881 "in_capsule_data_size": 4096, 00:20:40.881 "max_io_size": 131072, 00:20:40.881 "io_unit_size": 131072, 00:20:40.881 "max_aq_depth": 128, 00:20:40.881 "num_shared_buffers": 511, 00:20:40.881 "buf_cache_size": 4294967295, 00:20:40.881 "dif_insert_or_strip": false, 00:20:40.881 "zcopy": false, 00:20:40.881 "c2h_success": false, 00:20:40.881 "sock_priority": 0, 00:20:40.881 "abort_timeout_sec": 1, 00:20:40.881 "ack_timeout": 0, 00:20:40.881 "data_wr_pool_size": 0 00:20:40.881 } 00:20:40.881 }, 00:20:40.881 { 00:20:40.881 "method": "nvmf_create_subsystem", 00:20:40.881 "params": { 00:20:40.881 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:40.881 "allow_any_host": false, 00:20:40.881 "serial_number": "00000000000000000000", 00:20:40.881 "model_number": "SPDK bdev Controller", 00:20:40.881 "max_namespaces": 32, 00:20:40.881 "min_cntlid": 1, 00:20:40.881 "max_cntlid": 65519, 00:20:40.881 "ana_reporting": false 00:20:40.881 } 00:20:40.881 }, 00:20:40.881 { 00:20:40.881 "method": "nvmf_subsystem_add_host", 00:20:40.881 "params": { 00:20:40.881 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:40.881 "host": "nqn.2016-06.io.spdk:host1", 00:20:40.881 "psk": "key0" 00:20:40.881 } 00:20:40.881 }, 00:20:40.881 { 00:20:40.881 "method": "nvmf_subsystem_add_ns", 00:20:40.881 "params": { 00:20:40.881 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:40.881 "namespace": { 00:20:40.881 "nsid": 1, 00:20:40.881 "bdev_name": "malloc0", 00:20:40.881 "nguid": "0F2AD693F5B946A09668B15526C6FA92", 00:20:40.881 "uuid": "0f2ad693-f5b9-46a0-9668-b15526c6fa92", 00:20:40.881 "no_auto_visible": false 00:20:40.881 } 00:20:40.881 } 00:20:40.881 }, 00:20:40.881 { 00:20:40.881 "method": "nvmf_subsystem_add_listener", 00:20:40.881 "params": { 00:20:40.881 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:40.881 "listen_address": { 00:20:40.881 "trtype": "TCP", 00:20:40.881 "adrfam": "IPv4", 00:20:40.881 "traddr": "10.0.0.2", 
00:20:40.881 "trsvcid": "4420" 00:20:40.881 }, 00:20:40.881 "secure_channel": true 00:20:40.881 } 00:20:40.881 } 00:20:40.881 ] 00:20:40.881 } 00:20:40.881 ] 00:20:40.881 }' 00:20:40.881 03:01:31 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:41.140 03:01:31 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:20:41.140 "subsystems": [ 00:20:41.140 { 00:20:41.140 "subsystem": "keyring", 00:20:41.140 "config": [ 00:20:41.140 { 00:20:41.140 "method": "keyring_file_add_key", 00:20:41.140 "params": { 00:20:41.140 "name": "key0", 00:20:41.140 "path": "/tmp/tmp.urbfOxNyDE" 00:20:41.140 } 00:20:41.140 } 00:20:41.140 ] 00:20:41.140 }, 00:20:41.140 { 00:20:41.140 "subsystem": "iobuf", 00:20:41.140 "config": [ 00:20:41.140 { 00:20:41.140 "method": "iobuf_set_options", 00:20:41.140 "params": { 00:20:41.140 "small_pool_count": 8192, 00:20:41.140 "large_pool_count": 1024, 00:20:41.140 "small_bufsize": 8192, 00:20:41.140 "large_bufsize": 135168 00:20:41.140 } 00:20:41.140 } 00:20:41.140 ] 00:20:41.140 }, 00:20:41.140 { 00:20:41.140 "subsystem": "sock", 00:20:41.140 "config": [ 00:20:41.140 { 00:20:41.140 "method": "sock_impl_set_options", 00:20:41.140 "params": { 00:20:41.140 "impl_name": "posix", 00:20:41.140 "recv_buf_size": 2097152, 00:20:41.140 "send_buf_size": 2097152, 00:20:41.140 "enable_recv_pipe": true, 00:20:41.140 "enable_quickack": false, 00:20:41.140 "enable_placement_id": 0, 00:20:41.140 "enable_zerocopy_send_server": true, 00:20:41.140 "enable_zerocopy_send_client": false, 00:20:41.140 "zerocopy_threshold": 0, 00:20:41.140 "tls_version": 0, 00:20:41.140 "enable_ktls": false 00:20:41.140 } 00:20:41.140 }, 00:20:41.140 { 00:20:41.140 "method": "sock_impl_set_options", 00:20:41.140 "params": { 00:20:41.140 "impl_name": "ssl", 00:20:41.140 "recv_buf_size": 4096, 00:20:41.140 "send_buf_size": 4096, 00:20:41.140 "enable_recv_pipe": true, 00:20:41.140 "enable_quickack": false, 00:20:41.140 "enable_placement_id": 0, 00:20:41.140 "enable_zerocopy_send_server": true, 00:20:41.140 "enable_zerocopy_send_client": false, 00:20:41.140 "zerocopy_threshold": 0, 00:20:41.140 "tls_version": 0, 00:20:41.140 "enable_ktls": false 00:20:41.140 } 00:20:41.140 } 00:20:41.140 ] 00:20:41.140 }, 00:20:41.140 { 00:20:41.140 "subsystem": "vmd", 00:20:41.140 "config": [] 00:20:41.140 }, 00:20:41.140 { 00:20:41.140 "subsystem": "accel", 00:20:41.140 "config": [ 00:20:41.140 { 00:20:41.140 "method": "accel_set_options", 00:20:41.140 "params": { 00:20:41.140 "small_cache_size": 128, 00:20:41.140 "large_cache_size": 16, 00:20:41.140 "task_count": 2048, 00:20:41.140 "sequence_count": 2048, 00:20:41.140 "buf_count": 2048 00:20:41.140 } 00:20:41.140 } 00:20:41.140 ] 00:20:41.140 }, 00:20:41.140 { 00:20:41.140 "subsystem": "bdev", 00:20:41.140 "config": [ 00:20:41.140 { 00:20:41.140 "method": "bdev_set_options", 00:20:41.140 "params": { 00:20:41.140 "bdev_io_pool_size": 65535, 00:20:41.140 "bdev_io_cache_size": 256, 00:20:41.140 "bdev_auto_examine": true, 00:20:41.140 "iobuf_small_cache_size": 128, 00:20:41.140 "iobuf_large_cache_size": 16 00:20:41.140 } 00:20:41.140 }, 00:20:41.140 { 00:20:41.140 "method": "bdev_raid_set_options", 00:20:41.140 "params": { 00:20:41.140 "process_window_size_kb": 1024 00:20:41.140 } 00:20:41.140 }, 00:20:41.140 { 00:20:41.140 "method": "bdev_iscsi_set_options", 00:20:41.140 "params": { 00:20:41.140 "timeout_sec": 30 00:20:41.140 } 00:20:41.140 }, 00:20:41.140 { 00:20:41.140 "method": 
"bdev_nvme_set_options", 00:20:41.140 "params": { 00:20:41.140 "action_on_timeout": "none", 00:20:41.140 "timeout_us": 0, 00:20:41.140 "timeout_admin_us": 0, 00:20:41.140 "keep_alive_timeout_ms": 10000, 00:20:41.140 "arbitration_burst": 0, 00:20:41.140 "low_priority_weight": 0, 00:20:41.140 "medium_priority_weight": 0, 00:20:41.140 "high_priority_weight": 0, 00:20:41.140 "nvme_adminq_poll_period_us": 10000, 00:20:41.140 "nvme_ioq_poll_period_us": 0, 00:20:41.140 "io_queue_requests": 512, 00:20:41.140 "delay_cmd_submit": true, 00:20:41.140 "transport_retry_count": 4, 00:20:41.140 "bdev_retry_count": 3, 00:20:41.140 "transport_ack_timeout": 0, 00:20:41.140 "ctrlr_loss_timeout_sec": 0, 00:20:41.140 "reconnect_delay_sec": 0, 00:20:41.140 "fast_io_fail_timeout_sec": 0, 00:20:41.140 "disable_auto_failback": false, 00:20:41.140 "generate_uuids": false, 00:20:41.140 "transport_tos": 0, 00:20:41.140 "nvme_error_stat": false, 00:20:41.140 "rdma_srq_size": 0, 00:20:41.140 "io_path_stat": false, 00:20:41.140 "allow_accel_sequence": false, 00:20:41.140 "rdma_max_cq_size": 0, 00:20:41.140 "rdma_cm_event_timeout_ms": 0, 00:20:41.140 "dhchap_digests": [ 00:20:41.140 "sha256", 00:20:41.140 "sha384", 00:20:41.140 "sha512" 00:20:41.140 ], 00:20:41.140 "dhchap_dhgroups": [ 00:20:41.140 "null", 00:20:41.140 "ffdhe2048", 00:20:41.140 "ffdhe3072", 00:20:41.140 "ffdhe4096", 00:20:41.140 "ffdhe6144", 00:20:41.140 "ffdhe8192" 00:20:41.140 ] 00:20:41.140 } 00:20:41.140 }, 00:20:41.140 { 00:20:41.140 "method": "bdev_nvme_attach_controller", 00:20:41.140 "params": { 00:20:41.140 "name": "nvme0", 00:20:41.140 "trtype": "TCP", 00:20:41.140 "adrfam": "IPv4", 00:20:41.140 "traddr": "10.0.0.2", 00:20:41.140 "trsvcid": "4420", 00:20:41.140 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.140 "prchk_reftag": false, 00:20:41.140 "prchk_guard": false, 00:20:41.140 "ctrlr_loss_timeout_sec": 0, 00:20:41.140 "reconnect_delay_sec": 0, 00:20:41.140 "fast_io_fail_timeout_sec": 0, 00:20:41.140 "psk": "key0", 00:20:41.140 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:41.140 "hdgst": false, 00:20:41.140 "ddgst": false 00:20:41.140 } 00:20:41.140 }, 00:20:41.140 { 00:20:41.140 "method": "bdev_nvme_set_hotplug", 00:20:41.140 "params": { 00:20:41.140 "period_us": 100000, 00:20:41.140 "enable": false 00:20:41.140 } 00:20:41.140 }, 00:20:41.140 { 00:20:41.140 "method": "bdev_enable_histogram", 00:20:41.140 "params": { 00:20:41.140 "name": "nvme0n1", 00:20:41.140 "enable": true 00:20:41.140 } 00:20:41.140 }, 00:20:41.140 { 00:20:41.140 "method": "bdev_wait_for_examine" 00:20:41.140 } 00:20:41.140 ] 00:20:41.140 }, 00:20:41.140 { 00:20:41.140 "subsystem": "nbd", 00:20:41.140 "config": [] 00:20:41.140 } 00:20:41.140 ] 00:20:41.140 }' 00:20:41.140 03:01:31 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 374172 00:20:41.141 03:01:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 374172 ']' 00:20:41.141 03:01:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 374172 00:20:41.141 03:01:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:41.141 03:01:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:41.141 03:01:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 374172 00:20:41.141 03:01:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:41.141 03:01:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:41.141 03:01:31 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 374172' 00:20:41.141 killing process with pid 374172 00:20:41.141 03:01:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 374172 00:20:41.141 Received shutdown signal, test time was about 1.000000 seconds 00:20:41.141 00:20:41.141 Latency(us) 00:20:41.141 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:41.141 =================================================================================================================== 00:20:41.141 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:41.141 03:01:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 374172 00:20:41.399 03:01:32 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 374082 00:20:41.399 03:01:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 374082 ']' 00:20:41.399 03:01:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 374082 00:20:41.399 03:01:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:41.399 03:01:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:41.399 03:01:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 374082 00:20:41.399 03:01:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:41.399 03:01:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:41.399 03:01:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 374082' 00:20:41.399 killing process with pid 374082 00:20:41.399 03:01:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 374082 00:20:41.399 [2024-05-13 03:01:32.038234] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:41.399 03:01:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 374082 00:20:41.658 03:01:32 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:20:41.658 03:01:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:41.658 03:01:32 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:20:41.658 "subsystems": [ 00:20:41.658 { 00:20:41.658 "subsystem": "keyring", 00:20:41.658 "config": [ 00:20:41.658 { 00:20:41.658 "method": "keyring_file_add_key", 00:20:41.658 "params": { 00:20:41.658 "name": "key0", 00:20:41.658 "path": "/tmp/tmp.urbfOxNyDE" 00:20:41.658 } 00:20:41.658 } 00:20:41.658 ] 00:20:41.658 }, 00:20:41.658 { 00:20:41.658 "subsystem": "iobuf", 00:20:41.658 "config": [ 00:20:41.658 { 00:20:41.658 "method": "iobuf_set_options", 00:20:41.658 "params": { 00:20:41.658 "small_pool_count": 8192, 00:20:41.658 "large_pool_count": 1024, 00:20:41.658 "small_bufsize": 8192, 00:20:41.658 "large_bufsize": 135168 00:20:41.658 } 00:20:41.658 } 00:20:41.658 ] 00:20:41.658 }, 00:20:41.658 { 00:20:41.658 "subsystem": "sock", 00:20:41.658 "config": [ 00:20:41.658 { 00:20:41.658 "method": "sock_impl_set_options", 00:20:41.658 "params": { 00:20:41.658 "impl_name": "posix", 00:20:41.658 "recv_buf_size": 2097152, 00:20:41.658 "send_buf_size": 2097152, 00:20:41.658 "enable_recv_pipe": true, 00:20:41.658 "enable_quickack": false, 00:20:41.658 "enable_placement_id": 0, 00:20:41.658 "enable_zerocopy_send_server": true, 00:20:41.658 "enable_zerocopy_send_client": false, 00:20:41.658 "zerocopy_threshold": 0, 00:20:41.658 "tls_version": 0, 
00:20:41.658 "enable_ktls": false 00:20:41.658 } 00:20:41.658 }, 00:20:41.658 { 00:20:41.658 "method": "sock_impl_set_options", 00:20:41.658 "params": { 00:20:41.658 "impl_name": "ssl", 00:20:41.658 "recv_buf_size": 4096, 00:20:41.658 "send_buf_size": 4096, 00:20:41.658 "enable_recv_pipe": true, 00:20:41.658 "enable_quickack": false, 00:20:41.658 "enable_placement_id": 0, 00:20:41.658 "enable_zerocopy_send_server": true, 00:20:41.658 "enable_zerocopy_send_client": false, 00:20:41.658 "zerocopy_threshold": 0, 00:20:41.658 "tls_version": 0, 00:20:41.658 "enable_ktls": false 00:20:41.658 } 00:20:41.658 } 00:20:41.658 ] 00:20:41.658 }, 00:20:41.658 { 00:20:41.658 "subsystem": "vmd", 00:20:41.658 "config": [] 00:20:41.658 }, 00:20:41.658 { 00:20:41.658 "subsystem": "accel", 00:20:41.658 "config": [ 00:20:41.658 { 00:20:41.658 "method": "accel_set_options", 00:20:41.658 "params": { 00:20:41.658 "small_cache_size": 128, 00:20:41.658 "large_cache_size": 16, 00:20:41.658 "task_count": 2048, 00:20:41.658 "sequence_count": 2048, 00:20:41.658 "buf_count": 2048 00:20:41.658 } 00:20:41.658 } 00:20:41.658 ] 00:20:41.658 }, 00:20:41.658 { 00:20:41.658 "subsystem": "bdev", 00:20:41.658 "config": [ 00:20:41.658 { 00:20:41.658 "method": "bdev_set_options", 00:20:41.658 "params": { 00:20:41.658 "bdev_io_pool_size": 65535, 00:20:41.658 "bdev_io_cache_size": 256, 00:20:41.658 "bdev_auto_examine": true, 00:20:41.658 "iobuf_small_cache_size": 128, 00:20:41.658 "iobuf_large_cache_size": 16 00:20:41.658 } 00:20:41.658 }, 00:20:41.658 { 00:20:41.658 "method": "bdev_raid_set_options", 00:20:41.658 "params": { 00:20:41.658 "process_window_size_kb": 1024 00:20:41.658 } 00:20:41.658 }, 00:20:41.658 { 00:20:41.658 "method": "bdev_iscsi_set_options", 00:20:41.658 "params": { 00:20:41.658 "timeout_sec": 30 00:20:41.658 } 00:20:41.658 }, 00:20:41.658 { 00:20:41.658 "method": "bdev_nvme_set_options", 00:20:41.658 "params": { 00:20:41.658 "action_on_timeout": "none", 00:20:41.658 "timeout_us": 0, 00:20:41.658 "timeout_admin_us": 0, 00:20:41.658 "keep_alive_timeout_ms": 10000, 00:20:41.658 "arbitration_burst": 0, 00:20:41.658 "low_priority_weight": 0, 00:20:41.658 "medium_priority_weight": 0, 00:20:41.658 "high_priority_weight": 0, 00:20:41.658 "nvme_adminq_poll_period_us": 10000, 00:20:41.658 "nvme_ioq_poll_period_us": 0, 00:20:41.658 "io_queue_requests": 0, 00:20:41.658 "delay_cmd_submit": true, 00:20:41.658 "transport_retry_count": 4, 00:20:41.658 "bdev_retry_count": 3, 00:20:41.658 "transport_ack_timeout": 0, 00:20:41.658 "ctrlr_loss_timeout_sec": 0, 00:20:41.658 "reconnect_delay_sec": 0, 00:20:41.658 "fast_io_fail_timeout_sec": 0, 00:20:41.658 "disable_auto_failback": false, 00:20:41.658 "generate_uuids": false, 00:20:41.658 "transport_tos": 0, 00:20:41.658 "nvme_error_stat": false, 00:20:41.658 "rdma_srq_size": 0, 00:20:41.658 "io_path_stat": false, 00:20:41.658 "allow_accel_sequence": false, 00:20:41.658 "rdma_max_cq_size": 0, 00:20:41.658 "rdma_cm_event_timeout_ms": 0, 00:20:41.658 "dhchap_digests": [ 00:20:41.658 "sha256", 00:20:41.658 "sha384", 00:20:41.658 "sha512" 00:20:41.658 ], 00:20:41.658 "dhchap_dhgroups": [ 00:20:41.658 "null", 00:20:41.658 "ffdhe2048", 00:20:41.658 "ffdhe3072", 00:20:41.658 "ffdhe4096", 00:20:41.658 "ffdhe6144", 00:20:41.658 "ffdhe8192" 00:20:41.658 ] 00:20:41.659 } 00:20:41.659 }, 00:20:41.659 { 00:20:41.659 "method": "bdev_nvme_set_hotplug", 00:20:41.659 "params": { 00:20:41.659 "period_us": 100000, 00:20:41.659 "enable": false 00:20:41.659 } 00:20:41.659 }, 00:20:41.659 { 00:20:41.659 
"method": "bdev_malloc_create", 00:20:41.659 "params": { 00:20:41.659 "name": "malloc0", 00:20:41.659 "num_blocks": 8192, 00:20:41.659 "block_size": 4096, 00:20:41.659 "physical_block_size": 4096, 00:20:41.659 "uuid": "0f2ad693-f5b9-46a0-9668-b15526c6fa92", 00:20:41.659 "optimal_io_boundary": 0 00:20:41.659 } 00:20:41.659 }, 00:20:41.659 { 00:20:41.659 "method": "bdev_wait_for_examine" 00:20:41.659 } 00:20:41.659 ] 00:20:41.659 }, 00:20:41.659 { 00:20:41.659 "subsystem": "nbd", 00:20:41.659 "config": [] 00:20:41.659 }, 00:20:41.659 { 00:20:41.659 "subsystem": "scheduler", 00:20:41.659 "config": [ 00:20:41.659 { 00:20:41.659 "method": "framework_set_scheduler", 00:20:41.659 "params": { 00:20:41.659 "name": "static" 00:20:41.659 } 00:20:41.659 } 00:20:41.659 ] 00:20:41.659 }, 00:20:41.659 { 00:20:41.659 "subsystem": "nvmf", 00:20:41.659 "config": [ 00:20:41.659 { 00:20:41.659 "method": "nvmf_set_config", 00:20:41.659 "params": { 00:20:41.659 "discovery_filter": "match_any", 00:20:41.659 "admin_cmd_passthru": { 00:20:41.659 "identify_ctrlr": false 00:20:41.659 } 00:20:41.659 } 00:20:41.659 }, 00:20:41.659 { 00:20:41.659 "method": "nvmf_set_max_subsystems", 00:20:41.659 "params": { 00:20:41.659 "max_subsystems": 1024 00:20:41.659 } 00:20:41.659 }, 00:20:41.659 { 00:20:41.659 "method": "nvmf_set_crdt", 00:20:41.659 "params": { 00:20:41.659 "crdt1": 0, 00:20:41.659 "crdt2": 0, 00:20:41.659 "crdt3": 0 00:20:41.659 } 00:20:41.659 }, 00:20:41.659 { 00:20:41.659 "method": "nvmf_create_transport", 00:20:41.659 "params": { 00:20:41.659 "trtype": "TCP", 00:20:41.659 "max_queue_depth": 128, 00:20:41.659 "max_io_qpairs_per_ctrlr": 127, 00:20:41.659 "in_capsule_data_size": 4096, 00:20:41.659 "max_io_size": 131072, 00:20:41.659 "io_unit_size": 131072, 00:20:41.659 "max_aq_depth": 128, 00:20:41.659 "num_shared_buffers": 511, 00:20:41.659 "buf_cache_size": 4294967295, 00:20:41.659 "dif_insert_or_strip": false, 00:20:41.659 "zcopy": false, 00:20:41.659 "c2h_success": false, 00:20:41.659 "sock_priority": 0, 00:20:41.659 "abort_timeout_sec": 1, 00:20:41.659 "ack_timeout": 0, 00:20:41.659 "data_wr_pool_size": 0 00:20:41.659 } 00:20:41.659 }, 00:20:41.659 { 00:20:41.659 "method": "nvmf_create_subsystem", 00:20:41.659 "params": { 00:20:41.659 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.659 "allow_any_host": false, 00:20:41.659 "serial_number": "00000000000000000000", 00:20:41.659 "model_number": "SPDK bdev Controller", 00:20:41.659 "max_namespaces": 32, 00:20:41.659 "min_cntlid": 1, 00:20:41.659 "max_cntlid": 65519, 00:20:41.659 "ana_reporting": false 00:20:41.659 } 00:20:41.659 }, 00:20:41.659 { 00:20:41.659 "method": "nvmf_subsystem_add_host", 00:20:41.659 "params": { 00:20:41.659 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.659 "host": "nqn.2016-06.io.spdk:host1", 00:20:41.659 "psk": "key0" 00:20:41.659 } 00:20:41.659 }, 00:20:41.659 { 00:20:41.659 "method": "nvmf_subsystem_add_ns", 00:20:41.659 "params": { 00:20:41.659 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.659 "namespace": { 00:20:41.659 "nsid": 1, 00:20:41.659 "bdev_name": "malloc0", 00:20:41.659 "nguid": "0F2AD693F5B946A09668B15526C6FA92", 00:20:41.659 "uuid": "0f2ad693-f5b9-46a0-9668-b15526c6fa92", 00:20:41.659 "no_auto_visible": false 00:20:41.659 } 00:20:41.659 } 00:20:41.659 }, 00:20:41.659 { 00:20:41.659 "method": "nvmf_subsystem_add_listener", 00:20:41.659 "params": { 00:20:41.659 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.659 "listen_address": { 00:20:41.659 "trtype": "TCP", 00:20:41.659 "adrfam": "IPv4", 00:20:41.659 "traddr": 
"10.0.0.2", 00:20:41.659 "trsvcid": "4420" 00:20:41.659 }, 00:20:41.659 "secure_channel": true 00:20:41.659 } 00:20:41.659 } 00:20:41.659 ] 00:20:41.659 } 00:20:41.659 ] 00:20:41.659 }' 00:20:41.659 03:01:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:41.659 03:01:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:41.659 03:01:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=374514 00:20:41.659 03:01:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:41.659 03:01:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 374514 00:20:41.659 03:01:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 374514 ']' 00:20:41.659 03:01:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:41.659 03:01:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:41.659 03:01:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:41.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:41.659 03:01:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:41.659 03:01:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:41.659 [2024-05-13 03:01:32.350392] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:20:41.659 [2024-05-13 03:01:32.350477] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:41.659 EAL: No free 2048 kB hugepages reported on node 1 00:20:41.659 [2024-05-13 03:01:32.393332] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:41.659 [2024-05-13 03:01:32.423839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.917 [2024-05-13 03:01:32.513465] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:41.917 [2024-05-13 03:01:32.513527] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:41.917 [2024-05-13 03:01:32.513542] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:41.917 [2024-05-13 03:01:32.513557] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:41.917 [2024-05-13 03:01:32.513568] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:41.917 [2024-05-13 03:01:32.513657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:42.178 [2024-05-13 03:01:32.743214] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:42.178 [2024-05-13 03:01:32.775196] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:42.178 [2024-05-13 03:01:32.775274] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:42.178 [2024-05-13 03:01:32.790830] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:42.779 03:01:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:42.779 03:01:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:42.779 03:01:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:42.779 03:01:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:42.779 03:01:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:42.779 03:01:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:42.779 03:01:33 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=374674 00:20:42.779 03:01:33 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 374674 /var/tmp/bdevperf.sock 00:20:42.779 03:01:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 374674 ']' 00:20:42.779 03:01:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:42.779 03:01:33 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:42.779 03:01:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:42.779 03:01:33 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:20:42.779 "subsystems": [ 00:20:42.779 { 00:20:42.779 "subsystem": "keyring", 00:20:42.779 "config": [ 00:20:42.779 { 00:20:42.779 "method": "keyring_file_add_key", 00:20:42.779 "params": { 00:20:42.779 "name": "key0", 00:20:42.779 "path": "/tmp/tmp.urbfOxNyDE" 00:20:42.779 } 00:20:42.779 } 00:20:42.779 ] 00:20:42.779 }, 00:20:42.779 { 00:20:42.779 "subsystem": "iobuf", 00:20:42.779 "config": [ 00:20:42.779 { 00:20:42.779 "method": "iobuf_set_options", 00:20:42.779 "params": { 00:20:42.779 "small_pool_count": 8192, 00:20:42.779 "large_pool_count": 1024, 00:20:42.779 "small_bufsize": 8192, 00:20:42.779 "large_bufsize": 135168 00:20:42.779 } 00:20:42.779 } 00:20:42.779 ] 00:20:42.779 }, 00:20:42.779 { 00:20:42.779 "subsystem": "sock", 00:20:42.779 "config": [ 00:20:42.779 { 00:20:42.779 "method": "sock_impl_set_options", 00:20:42.779 "params": { 00:20:42.779 "impl_name": "posix", 00:20:42.779 "recv_buf_size": 2097152, 00:20:42.779 "send_buf_size": 2097152, 00:20:42.779 "enable_recv_pipe": true, 00:20:42.779 "enable_quickack": false, 00:20:42.779 "enable_placement_id": 0, 00:20:42.779 "enable_zerocopy_send_server": true, 00:20:42.779 "enable_zerocopy_send_client": false, 00:20:42.779 "zerocopy_threshold": 0, 00:20:42.779 "tls_version": 0, 00:20:42.779 "enable_ktls": false 00:20:42.779 } 00:20:42.779 }, 00:20:42.779 { 00:20:42.779 "method": "sock_impl_set_options", 00:20:42.779 "params": { 00:20:42.779 "impl_name": "ssl", 00:20:42.779 
"recv_buf_size": 4096, 00:20:42.779 "send_buf_size": 4096, 00:20:42.779 "enable_recv_pipe": true, 00:20:42.779 "enable_quickack": false, 00:20:42.779 "enable_placement_id": 0, 00:20:42.779 "enable_zerocopy_send_server": true, 00:20:42.779 "enable_zerocopy_send_client": false, 00:20:42.779 "zerocopy_threshold": 0, 00:20:42.779 "tls_version": 0, 00:20:42.779 "enable_ktls": false 00:20:42.779 } 00:20:42.779 } 00:20:42.779 ] 00:20:42.779 }, 00:20:42.779 { 00:20:42.779 "subsystem": "vmd", 00:20:42.779 "config": [] 00:20:42.779 }, 00:20:42.779 { 00:20:42.779 "subsystem": "accel", 00:20:42.779 "config": [ 00:20:42.779 { 00:20:42.779 "method": "accel_set_options", 00:20:42.779 "params": { 00:20:42.779 "small_cache_size": 128, 00:20:42.779 "large_cache_size": 16, 00:20:42.779 "task_count": 2048, 00:20:42.779 "sequence_count": 2048, 00:20:42.779 "buf_count": 2048 00:20:42.779 } 00:20:42.779 } 00:20:42.779 ] 00:20:42.779 }, 00:20:42.779 { 00:20:42.779 "subsystem": "bdev", 00:20:42.779 "config": [ 00:20:42.779 { 00:20:42.779 "method": "bdev_set_options", 00:20:42.779 "params": { 00:20:42.779 "bdev_io_pool_size": 65535, 00:20:42.779 "bdev_io_cache_size": 256, 00:20:42.779 "bdev_auto_examine": true, 00:20:42.779 "iobuf_small_cache_size": 128, 00:20:42.779 "iobuf_large_cache_size": 16 00:20:42.779 } 00:20:42.779 }, 00:20:42.779 { 00:20:42.779 "method": "bdev_raid_set_options", 00:20:42.779 "params": { 00:20:42.779 "process_window_size_kb": 1024 00:20:42.779 } 00:20:42.779 }, 00:20:42.779 { 00:20:42.779 "method": "bdev_iscsi_set_options", 00:20:42.779 "params": { 00:20:42.779 "timeout_sec": 30 00:20:42.779 } 00:20:42.779 }, 00:20:42.779 { 00:20:42.779 "method": "bdev_nvme_set_options", 00:20:42.779 "params": { 00:20:42.779 "action_on_timeout": "none", 00:20:42.779 "timeout_us": 0, 00:20:42.779 "timeout_admin_us": 0, 00:20:42.779 "keep_alive_timeout_ms": 10000, 00:20:42.779 "arbitration_burst": 0, 00:20:42.779 "low_priority_weight": 0, 00:20:42.779 "medium_priority_weight": 0, 00:20:42.779 "high_priority_weight": 0, 00:20:42.779 "nvme_adminq_poll_period_us": 10000, 00:20:42.779 "nvme_ioq_poll_period_us": 0, 00:20:42.779 "io_queue_requests": 512, 00:20:42.779 "delay_cmd_submit": true, 00:20:42.779 "transport_retry_count": 4, 00:20:42.779 "bdev_retry_count": 3, 00:20:42.779 "transport_ack_timeout": 0, 00:20:42.779 "ctrlr_loss_timeout_sec": 0, 00:20:42.779 "reconnect_delay_sec": 0, 00:20:42.780 "fast_io_fail_timeout_sec": 0, 00:20:42.780 "disable_auto_failback": false, 00:20:42.780 "generate_uuids": false, 00:20:42.780 "transport_tos": 0, 00:20:42.780 "nvme_error_stat": false, 00:20:42.780 "rdma_srq_size": 0, 00:20:42.780 "io_path_stat": false, 00:20:42.780 "allow_accel_sequence": false, 00:20:42.780 "rdma_max_cq_size": 0, 00:20:42.780 "rdma_cm_event_timeout_ms": 0, 00:20:42.780 "dhchap_digests": [ 00:20:42.780 "sha256", 00:20:42.780 "sha384", 00:20:42.780 "sha512" 00:20:42.780 ], 00:20:42.780 "dhchap_dhgroups": [ 00:20:42.780 "null", 00:20:42.780 "ffdhe2048", 00:20:42.780 "ffdhe3072", 00:20:42.780 03:01:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:42.780 "ffdhe4096", 00:20:42.780 "ffdhe6144", 00:20:42.780 "ffdhe8192" 00:20:42.780 ] 00:20:42.780 } 00:20:42.780 }, 00:20:42.780 { 00:20:42.780 "method": "bdev_nvme_attach_controller", 00:20:42.780 "params": { 00:20:42.780 "name": "nvme0", 00:20:42.780 "trtype": "TCP", 00:20:42.780 "adrfam": "IPv4", 00:20:42.780 "traddr": "10.0.0.2", 00:20:42.780 "trsvcid": "4420", 00:20:42.780 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:42.780 "prchk_reftag": false, 00:20:42.780 "prchk_guard": false, 00:20:42.780 "ctrlr_loss_timeout_sec": 0, 00:20:42.780 "reconnect_delay_sec": 0, 00:20:42.780 "fast_io_fail_timeout_sec": 0, 00:20:42.780 "psk": "key0", 00:20:42.780 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:42.780 "hdgst": false, 00:20:42.780 "ddgst": false 00:20:42.780 } 00:20:42.780 }, 00:20:42.780 { 00:20:42.780 "method": "bdev_nvme_set_hotplug", 00:20:42.780 "params": { 00:20:42.780 "period_us": 100000, 00:20:42.780 "enable": false 00:20:42.780 } 00:20:42.780 }, 00:20:42.780 { 00:20:42.780 "method": "bdev_enable_histogram", 00:20:42.780 "params": { 00:20:42.780 "name": "nvme0n1", 00:20:42.780 "enable": true 00:20:42.780 } 00:20:42.780 }, 00:20:42.780 { 00:20:42.780 "method": "bdev_wait_for_examine" 00:20:42.780 } 00:20:42.780 ] 00:20:42.780 }, 00:20:42.780 { 00:20:42.780 "subsystem": "nbd", 00:20:42.780 "config": [] 00:20:42.780 } 00:20:42.780 ] 00:20:42.780 }' 00:20:42.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:42.780 03:01:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:42.780 03:01:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:42.780 [2024-05-13 03:01:33.457899] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:20:42.780 [2024-05-13 03:01:33.457977] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid374674 ] 00:20:42.780 EAL: No free 2048 kB hugepages reported on node 1 00:20:42.780 [2024-05-13 03:01:33.488689] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:42.780 [2024-05-13 03:01:33.519962] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.038 [2024-05-13 03:01:33.611232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:43.038 [2024-05-13 03:01:33.782307] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:43.970 03:01:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:43.970 03:01:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:43.970 03:01:34 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:43.970 03:01:34 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:20:43.970 03:01:34 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.970 03:01:34 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:43.970 Running I/O for 1 seconds... 
00:20:45.341 00:20:45.341 Latency(us) 00:20:45.341 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:45.341 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:45.341 Verification LBA range: start 0x0 length 0x2000 00:20:45.341 nvme0n1 : 1.11 970.53 3.79 0.00 0.00 127174.15 11019.76 177869.56 00:20:45.341 =================================================================================================================== 00:20:45.341 Total : 970.53 3.79 0.00 0.00 127174.15 11019.76 177869.56 00:20:45.341 0 00:20:45.341 03:01:35 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:20:45.341 03:01:35 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:20:45.341 03:01:35 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:45.341 03:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id 00:20:45.341 03:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0 00:20:45.341 03:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:20:45.341 03:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:45.341 03:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:20:45.341 03:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:20:45.341 03:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files 00:20:45.341 03:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:45.341 nvmf_trace.0 00:20:45.341 03:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0 00:20:45.341 03:01:35 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 374674 00:20:45.341 03:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 374674 ']' 00:20:45.341 03:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 374674 00:20:45.341 03:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:45.341 03:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:45.341 03:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 374674 00:20:45.341 03:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:45.341 03:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:45.341 03:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 374674' 00:20:45.341 killing process with pid 374674 00:20:45.341 03:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 374674 00:20:45.341 Received shutdown signal, test time was about 1.000000 seconds 00:20:45.341 00:20:45.341 Latency(us) 00:20:45.341 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:45.341 =================================================================================================================== 00:20:45.341 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:45.341 03:01:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 374674 00:20:45.600 03:01:36 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:45.600 03:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:45.600 03:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:20:45.600 
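After the verify run the test tears everything down in a fixed order: capture the trace shared-memory file for offline analysis, kill bdevperf, flush and unload the kernel NVMe/TCP initiator modules, stop the target, and remove the temporary key files. A condensed sketch of that teardown with names taken from this log ($output_dir stands in for the .../spdk/../output directory in the tar path, and remove_spdk_ns is a helper whose body is hidden behind xtrace_disable here):

    tar -C /dev/shm/ -cvzf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0  # trace shm produced by the target (-i 0 -e 0xFFFF)
    kill $bdevperf_pid                                    # killprocess 374674 above
    sync
    modprobe -v -r nvme-tcp                               # also drops the nvme_fabrics/nvme_keyring dependencies shown as rmmod output
    modprobe -v -r nvme-fabrics
    kill $nvmfpid                                         # killprocess 374514, the nvmf_tgt started earlier
    remove_spdk_ns                                        # namespace cleanup helper from nvmf/common.sh
    ip -4 addr flush cvl_0_1
    rm -f /tmp/tmp.O3N3s9J468 /tmp/tmp.jzpu8fuOsx /tmp/tmp.urbfOxNyDE          # the PSK/key temp files from this test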
03:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:45.600 03:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:20:45.600 03:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:45.600 03:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:45.600 rmmod nvme_tcp 00:20:45.600 rmmod nvme_fabrics 00:20:45.600 rmmod nvme_keyring 00:20:45.600 03:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:45.600 03:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:20:45.600 03:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:20:45.600 03:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 374514 ']' 00:20:45.600 03:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 374514 00:20:45.600 03:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 374514 ']' 00:20:45.600 03:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 374514 00:20:45.600 03:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:45.600 03:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:45.600 03:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 374514 00:20:45.600 03:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:45.600 03:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:45.600 03:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 374514' 00:20:45.600 killing process with pid 374514 00:20:45.600 03:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 374514 00:20:45.600 [2024-05-13 03:01:36.278770] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:45.600 03:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 374514 00:20:45.858 03:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:45.858 03:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:45.858 03:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:45.858 03:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:45.858 03:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:45.858 03:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:45.858 03:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:45.858 03:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:47.765 03:01:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:47.765 03:01:38 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.O3N3s9J468 /tmp/tmp.jzpu8fuOsx /tmp/tmp.urbfOxNyDE 00:20:47.765 00:20:47.765 real 1m19.506s 00:20:47.765 user 1m58.129s 00:20:47.765 sys 0m28.294s 00:20:47.765 03:01:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:47.765 03:01:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:47.765 ************************************ 00:20:47.765 END TEST nvmf_tls 00:20:47.765 ************************************ 00:20:48.024 03:01:38 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:48.024 03:01:38 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:48.024 03:01:38 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:48.024 03:01:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:48.024 ************************************ 00:20:48.024 START TEST nvmf_fips 00:20:48.024 ************************************ 00:20:48.024 03:01:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:48.024 * Looking for test storage... 00:20:48.024 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:48.024 03:01:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:48.024 03:01:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:48.024 03:01:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:48.024 03:01:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:48.024 03:01:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:48.024 03:01:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:48.024 03:01:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:48.024 03:01:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:48.024 03:01:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:48.024 03:01:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:48.024 03:01:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:48.024 03:01:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:48.024 03:01:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:48.024 03:01:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl 
version 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:20:48.025 Error setting digest 00:20:48.025 0052F471127F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:20:48.025 0052F471127F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:20:48.025 03:01:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:20:48.026 03:01:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:48.026 03:01:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:48.026 03:01:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:48.026 03:01:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:20:48.026 03:01:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:48.026 03:01:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:48.026 03:01:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:48.026 03:01:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:48.026 03:01:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:48.026 03:01:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:48.026 03:01:38 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:48.026 03:01:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:48.026 03:01:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:48.026 03:01:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:48.026 03:01:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:20:48.026 03:01:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:49.927 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:49.927 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:20:49.927 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:49.927 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:49.927 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:49.927 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:49.927 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:49.927 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:20:49.927 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:49.927 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:20:50.188 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:20:50.188 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:20:50.188 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:20:50.188 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:20:50.188 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:20:50.188 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:50.188 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:50.188 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:50.188 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:50.188 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:50.188 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:50.188 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:50.188 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:50.188 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:50.188 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:50.188 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:50.188 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:50.188 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:50.188 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:50.188 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:50.188 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:50.188 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:50.188 
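The deliberate openssl md5 failure a few entries back ("Error setting digest ... unsupported") is the pass condition of the FIPS gate: with OPENSSL_CONF pointing at the generated spdk_fips.conf, the fips provider is active and a non-approved digest must be refused before the test is allowed to continue into the NVMe/TCP setup that follows. A condensed sketch of that gate (build_openssl_config writes spdk_fips.conf; both names are taken from this log, not from a system default):

    export OPENSSL_CONF=spdk_fips.conf
    openssl list -providers | grep name               # expect both a base provider and a fips provider to be listed
    if openssl md5 <(echo test) 2>/dev/null; then
        echo "MD5 still works, so the FIPS provider is not active" >&2
        exit 1
    fi
    echo "MD5 rejected as expected; FIPS mode is in effect"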
03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:50.188 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:50.188 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:50.188 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:50.188 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:50.188 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:50.188 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:50.188 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:50.188 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:50.188 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:50.188 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:50.188 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:50.188 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:50.189 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:50.189 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:50.189 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:50.189 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:20:50.189 00:20:50.189 --- 10.0.0.2 ping statistics --- 00:20:50.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.189 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:50.189 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:50.189 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:20:50.189 00:20:50.189 --- 10.0.0.1 ping statistics --- 00:20:50.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.189 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=377023 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 377023 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 377023 ']' 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:50.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:50.189 03:01:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:50.189 [2024-05-13 03:01:40.977517] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:20:50.189 [2024-05-13 03:01:40.977616] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:50.449 EAL: No free 2048 kB hugepages reported on node 1 00:20:50.449 [2024-05-13 03:01:41.015198] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:50.449 [2024-05-13 03:01:41.047309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.449 [2024-05-13 03:01:41.136406] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:50.449 [2024-05-13 03:01:41.136455] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:50.449 [2024-05-13 03:01:41.136483] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:50.449 [2024-05-13 03:01:41.136494] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:50.449 [2024-05-13 03:01:41.136504] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:50.449 [2024-05-13 03:01:41.136530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:51.383 03:01:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:51.383 03:01:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:20:51.383 03:01:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:51.383 03:01:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:51.383 03:01:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:51.383 03:01:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:51.383 03:01:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:20:51.383 03:01:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:51.383 03:01:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:51.383 03:01:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:51.383 03:01:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:51.383 03:01:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:51.383 03:01:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:51.383 03:01:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:51.642 [2024-05-13 03:01:42.238352] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:51.642 [2024-05-13 03:01:42.254277] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:51.642 [2024-05-13 03:01:42.254360] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:51.642 [2024-05-13 03:01:42.254581] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:51.642 [2024-05-13 03:01:42.286817] tcp.c:3657:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:51.642 malloc0 00:20:51.642 03:01:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:51.642 03:01:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=377180 00:20:51.642 03:01:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:51.642 03:01:42 
nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 377180 /var/tmp/bdevperf.sock 00:20:51.642 03:01:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 377180 ']' 00:20:51.642 03:01:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:51.642 03:01:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:51.642 03:01:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:51.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:51.642 03:01:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:51.642 03:01:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:51.642 [2024-05-13 03:01:42.376468] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:20:51.642 [2024-05-13 03:01:42.376554] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid377180 ] 00:20:51.642 EAL: No free 2048 kB hugepages reported on node 1 00:20:51.642 [2024-05-13 03:01:42.407566] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:51.642 [2024-05-13 03:01:42.434039] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.901 [2024-05-13 03:01:42.518965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:51.901 03:01:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:51.901 03:01:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:20:51.901 03:01:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:52.159 [2024-05-13 03:01:42.899030] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:52.159 [2024-05-13 03:01:42.899156] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:52.418 TLSTESTn1 00:20:52.418 03:01:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:52.418 Running I/O for 10 seconds... 
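Condensed from the fips.sh trace above, the TLS data path is exercised in three steps: the pre-shared key in NVMe TLS interchange format is written to disk with mode 0600, bdevperf is started in wait mode (-z) on its own RPC socket, and the NVMe/TCP controller is attached with --psk before perform_tests drives the queued verify job for 10 seconds. A minimal sketch of that flow, with paths abbreviated relative to an SPDK checkout instead of the absolute Jenkins workspace paths in the trace:

# TLS PSK in NVMe interchange format (same key string as in the trace above)
key_path=/tmp/key.txt
echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
chmod 0600 "$key_path"

# start bdevperf in wait mode against a private RPC socket
# (the harness waits for the socket with waitforlisten before issuing RPCs)
./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

# attach the NVMe/TCP controller over TLS, then run the queued verify job
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk "$key_path"
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

Note that the trace itself flags the --psk / PSK-path mechanism as deprecated and scheduled for removal in v24.09, so this reflects the interface of the build under test, not necessarily current SPDK.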
00:21:04.612 00:21:04.612 Latency(us) 00:21:04.612 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:04.612 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:04.612 Verification LBA range: start 0x0 length 0x2000 00:21:04.612 TLSTESTn1 : 10.11 906.51 3.54 0.00 0.00 140586.51 10145.94 173209.22 00:21:04.612 =================================================================================================================== 00:21:04.612 Total : 906.51 3.54 0.00 0.00 140586.51 10145.94 173209.22 00:21:04.612 0 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:04.612 nvmf_trace.0 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 377180 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 377180 ']' 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 377180 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 377180 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 377180' 00:21:04.612 killing process with pid 377180 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 377180 00:21:04.612 Received shutdown signal, test time was about 10.000000 seconds 00:21:04.612 00:21:04.612 Latency(us) 00:21:04.612 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:04.612 =================================================================================================================== 00:21:04.612 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:04.612 [2024-05-13 03:01:53.352839] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 377180 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:04.612 rmmod nvme_tcp 00:21:04.612 rmmod nvme_fabrics 00:21:04.612 rmmod nvme_keyring 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 377023 ']' 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 377023 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 377023 ']' 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 377023 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 377023 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 377023' 00:21:04.612 killing process with pid 377023 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 377023 00:21:04.612 [2024-05-13 03:01:53.659260] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:04.612 [2024-05-13 03:01:53.659315] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 377023 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:04.612 03:01:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.180 03:01:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:05.180 03:01:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:05.180 00:21:05.180 real 0m17.351s 00:21:05.180 user 0m21.910s 00:21:05.180 sys 0m6.185s 00:21:05.180 03:01:55 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:21:05.180 03:01:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:05.180 ************************************ 00:21:05.180 END TEST nvmf_fips 00:21:05.180 ************************************ 00:21:05.438 03:01:55 nvmf_tcp -- nvmf/nvmf.sh@64 -- # '[' 1 -eq 1 ']' 00:21:05.438 03:01:55 nvmf_tcp -- nvmf/nvmf.sh@65 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:21:05.438 03:01:55 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:05.438 03:01:55 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:05.438 03:01:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:05.438 ************************************ 00:21:05.438 START TEST nvmf_fuzz 00:21:05.438 ************************************ 00:21:05.438 03:01:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:21:05.438 * Looking for test storage... 00:21:05.438 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:05.438 03:01:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:05.438 03:01:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:21:05.438 03:01:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:05.438 03:01:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:05.438 03:01:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:05.438 03:01:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:05.438 03:01:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:05.438 03:01:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:05.438 03:01:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:05.438 03:01:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:05.438 03:01:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:05.438 03:01:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:05.438 03:01:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:05.438 03:01:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:05.438 03:01:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:05.438 03:01:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:05.438 03:01:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:05.438 03:01:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:05.438 03:01:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:05.438 03:01:56 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:05.438 03:01:56 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:05.438 03:01:56 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:05.438 03:01:56 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.438 03:01:56 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.438 03:01:56 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.438 03:01:56 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:21:05.438 03:01:56 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.438 03:01:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:21:05.438 03:01:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:05.438 03:01:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:05.438 03:01:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:05.438 03:01:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:05.438 03:01:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:05.438 03:01:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:05.438 03:01:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:05.438 03:01:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:05.438 03:01:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:21:05.438 03:01:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:05.438 03:01:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:05.438 03:01:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:05.438 
03:01:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:05.438 03:01:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:05.438 03:01:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.438 03:01:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:05.438 03:01:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.438 03:01:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:05.438 03:01:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:05.438 03:01:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:21:05.438 03:01:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:07.340 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:07.340 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:21:07.340 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:07.340 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:07.340 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:07.340 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:07.340 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:07.340 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:21:07.340 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:07.340 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:21:07.340 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:21:07.340 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:21:07.340 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:21:07.340 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:21:07.340 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:21:07.340 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:07.340 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:07.340 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:07.340 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:07.340 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:07.340 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:07.340 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:07.340 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:07.340 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:07.340 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:07.340 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:07.340 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:07.340 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:07.340 03:01:58 nvmf_tcp.nvmf_fuzz -- 
nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:07.340 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:07.340 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:07.340 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:07.340 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:07.340 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:07.340 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:07.340 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:07.340 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:07.340 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:07.340 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:07.340 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:07.340 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:07.340 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:07.340 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:07.340 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:07.340 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:07.340 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:07.340 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:07.628 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:07.628 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:07.628 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:07.628 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:07.628 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:07.628 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:07.629 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:07.629 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:07.629 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:07.629 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:21:07.629 00:21:07.629 --- 10.0.0.2 ping statistics --- 00:21:07.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.629 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:07.629 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:07.629 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:21:07.629 00:21:07.629 --- 10.0.0.1 ping statistics --- 00:21:07.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.629 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=380430 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 380430 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@827 -- # '[' -z 380430 ']' 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:07.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
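The nvmf_tgt instance just launched above (core mask 0x1 inside the target namespace) is then populated by the rpc_cmd calls that follow: a TCP transport, a malloc bdev, and a subsystem with a listener on 10.0.0.2:4420, after which nvme_fuzz is pointed at that transport ID. A condensed sketch of those entries, assuming rpc_cmd resolves to scripts/rpc.py against the default /var/tmp/spdk.sock as in the harness:

rpc=scripts/rpc.py        # absolute Jenkins paths in the trace shortened to a checkout-relative path
trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create -b Malloc0 64 512          # size 64, block size 512, as in the trace
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# first fuzz pass; flags copied verbatim from the fabrics_fuzz.sh trace
test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a

The second pass in the trace reruns nvme_fuzz with -j example.json instead of the timed random run, which is why its summary reports only a handful of admin commands.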
00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:07.629 03:01:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:07.888 03:01:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:07.888 03:01:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@860 -- # return 0 00:21:07.888 03:01:58 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:07.888 03:01:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.888 03:01:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:07.888 03:01:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.888 03:01:58 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:21:07.888 03:01:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.888 03:01:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:07.888 Malloc0 00:21:07.888 03:01:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.888 03:01:58 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:07.888 03:01:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.888 03:01:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:07.888 03:01:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.888 03:01:58 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:07.888 03:01:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.888 03:01:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:07.888 03:01:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.888 03:01:58 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:07.888 03:01:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.888 03:01:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:08.147 03:01:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.147 03:01:58 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:21:08.147 03:01:58 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:21:40.228 Fuzzing completed. 
Shutting down the fuzz application 00:21:40.228 00:21:40.228 Dumping successful admin opcodes: 00:21:40.228 8, 9, 10, 24, 00:21:40.228 Dumping successful io opcodes: 00:21:40.228 0, 9, 00:21:40.228 NS: 0x200003aeff00 I/O qp, Total commands completed: 443832, total successful commands: 2581, random_seed: 3780681792 00:21:40.228 NS: 0x200003aeff00 admin qp, Total commands completed: 55616, total successful commands: 443, random_seed: 1950134784 00:21:40.228 03:02:29 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:21:40.228 Fuzzing completed. Shutting down the fuzz application 00:21:40.228 00:21:40.228 Dumping successful admin opcodes: 00:21:40.228 24, 00:21:40.228 Dumping successful io opcodes: 00:21:40.228 00:21:40.228 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 2058437048 00:21:40.228 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 2058578498 00:21:40.228 03:02:30 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:40.228 03:02:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.228 03:02:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:40.228 03:02:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.228 03:02:30 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:21:40.228 03:02:30 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:21:40.228 03:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:40.228 03:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:21:40.228 03:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:40.228 03:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:21:40.228 03:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:40.228 03:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:40.228 rmmod nvme_tcp 00:21:40.228 rmmod nvme_fabrics 00:21:40.228 rmmod nvme_keyring 00:21:40.228 03:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:40.228 03:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:21:40.228 03:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:21:40.228 03:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 380430 ']' 00:21:40.228 03:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 380430 00:21:40.228 03:02:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@946 -- # '[' -z 380430 ']' 00:21:40.228 03:02:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@950 -- # kill -0 380430 00:21:40.228 03:02:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # uname 00:21:40.228 03:02:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:40.228 03:02:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 380430 00:21:40.228 03:02:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:40.228 03:02:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:40.228 
03:02:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 380430' 00:21:40.228 killing process with pid 380430 00:21:40.229 03:02:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@965 -- # kill 380430 00:21:40.229 03:02:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@970 -- # wait 380430 00:21:40.229 03:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:40.229 03:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:40.229 03:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:40.229 03:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:40.229 03:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:40.229 03:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.229 03:02:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:40.229 03:02:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.767 03:02:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:42.767 03:02:32 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:21:42.767 00:21:42.767 real 0m36.950s 00:21:42.767 user 0m50.527s 00:21:42.767 sys 0m15.490s 00:21:42.767 03:02:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:42.768 03:02:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:42.768 ************************************ 00:21:42.768 END TEST nvmf_fuzz 00:21:42.768 ************************************ 00:21:42.768 03:02:32 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:21:42.768 03:02:32 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:42.768 03:02:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:42.768 03:02:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:42.768 ************************************ 00:21:42.768 START TEST nvmf_multiconnection 00:21:42.768 ************************************ 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:21:42.768 * Looking for test storage... 
00:21:42.768 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:21:42.768 03:02:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:44.671 03:02:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:44.671 03:02:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:21:44.671 03:02:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:44.671 03:02:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:44.671 03:02:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:44.671 03:02:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:44.671 03:02:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:44.671 03:02:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:21:44.671 03:02:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:44.671 03:02:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:21:44.671 03:02:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:21:44.671 03:02:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:21:44.671 03:02:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:21:44.671 03:02:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:21:44.671 03:02:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:21:44.671 03:02:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:44.671 03:02:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:44.672 03:02:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:44.672 03:02:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:44.672 03:02:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:44.672 03:02:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:44.672 03:02:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:44.672 03:02:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:44.672 03:02:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:44.672 03:02:35 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:44.672 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:44.672 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:44.672 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:44.672 03:02:35 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:44.672 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:44.672 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:44.672 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:21:44.672 00:21:44.672 --- 10.0.0.2 ping statistics --- 00:21:44.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.672 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:44.672 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:44.672 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:21:44.672 00:21:44.672 --- 10.0.0.1 ping statistics --- 00:21:44.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.672 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:21:44.672 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:44.673 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:44.673 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:44.673 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=386155 00:21:44.673 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:44.673 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 386155 00:21:44.673 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@827 -- # '[' -z 386155 ']' 00:21:44.673 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:44.673 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:44.673 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:44.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
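Condensed from the trace above, the target-side network setup and app launch amount to roughly the following sequence. Interface names (cvl_0_0 / cvl_0_1), addresses (10.0.0.1 / 10.0.0.2), port 4420, and the nvmf_tgt flags are taken verbatim from the log; this is a minimal sketch without the retry and error handling that nvmf/common.sh adds around these steps.

    # flush any stale addresses, then move one port of the NIC pair into its own namespace
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator side stays in the root namespace, target side lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port and verify reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # start the target inside the namespace (backgrounded by nvmfappstart in the harness),
    # then wait for its RPC socket at /var/tmp/spdk.sock
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &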
00:21:44.673 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:44.673 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:44.673 [2024-05-13 03:02:35.220534] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:21:44.673 [2024-05-13 03:02:35.220625] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:44.673 EAL: No free 2048 kB hugepages reported on node 1 00:21:44.673 [2024-05-13 03:02:35.258088] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:44.673 [2024-05-13 03:02:35.285150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:44.673 [2024-05-13 03:02:35.371533] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:44.673 [2024-05-13 03:02:35.371597] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:44.673 [2024-05-13 03:02:35.371620] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:44.673 [2024-05-13 03:02:35.371631] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:44.673 [2024-05-13 03:02:35.371642] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:44.673 [2024-05-13 03:02:35.371723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:44.673 [2024-05-13 03:02:35.371790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:44.673 [2024-05-13 03:02:35.371856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:44.673 [2024-05-13 03:02:35.371854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:44.931 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:44.931 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@860 -- # return 0 00:21:44.931 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:44.931 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:44.931 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:44.931 03:02:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:44.931 03:02:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:44.931 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.931 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:44.931 [2024-05-13 03:02:35.529555] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:44.931 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.931 03:02:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:21:44.931 03:02:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:44.931 03:02:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # 
rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:44.931 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.931 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:44.931 Malloc1 00:21:44.931 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.931 03:02:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:21:44.931 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.931 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:44.931 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.931 03:02:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:44.931 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.931 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:44.931 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.931 03:02:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:44.931 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.931 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:44.931 [2024-05-13 03:02:35.586298] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:44.931 [2024-05-13 03:02:35.586603] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:44.931 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.931 03:02:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:44.931 03:02:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:21:44.932 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.932 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:44.932 Malloc2 00:21:44.932 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.932 03:02:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:21:44.932 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.932 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:44.932 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.932 03:02:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:21:44.932 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.932 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:44.932 03:02:35 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.932 03:02:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:44.932 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.932 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:44.932 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.932 03:02:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:44.932 03:02:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:21:44.932 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.932 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:44.932 Malloc3 00:21:44.932 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.932 03:02:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:21:44.932 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.932 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:44.932 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.932 03:02:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:21:44.932 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.932 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:44.932 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.932 03:02:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:21:44.932 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.932 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:44.932 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.932 03:02:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:44.932 03:02:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:21:44.932 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.932 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:44.932 Malloc4 00:21:44.932 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.932 03:02:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:21:44.932 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.932 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:44.932 03:02:35 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.932 03:02:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:21:44.932 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.932 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:44.932 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.932 03:02:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:21:44.932 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.932 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:45.191 Malloc5 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:45.191 Malloc6 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
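The per-subsystem RPC pattern traced above, which continues below through Malloc11 / cnode11, condenses to the loop sketched here. rpc_cmd is the test-harness wrapper around scripts/rpc.py talking to the target started earlier; the 64 MB bdev size, 512-byte block size, and 11-subsystem count come from MALLOC_BDEV_SIZE, MALLOC_BLOCK_SIZE, and NVMF_SUBSYS set at the top of multiconnection.sh in this log.

    # transport is created once before the loop (multiconnection.sh@19 above)
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    # then, for each of the 11 subsystems:
    for i in $(seq 1 11); do
        rpc_cmd bdev_malloc_create 64 512 -b Malloc$i                              # 64 MB malloc bdev, 512 B blocks
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i     # allow any host, serial SPDK$i
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i         # expose the bdev as a namespace
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done

Each subsystem is then connected from the initiator side with nvme connect against 10.0.0.2:4420, as traced further below.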
00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:45.191 Malloc7 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:45.191 03:02:35 
nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:45.191 Malloc8 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.191 03:02:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:21:45.192 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.192 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:45.192 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.192 03:02:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:45.192 03:02:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:21:45.192 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.192 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:45.192 Malloc9 00:21:45.192 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.192 03:02:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:21:45.192 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.192 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:45.192 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.192 03:02:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:21:45.192 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.192 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:45.192 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.192 03:02:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:21:45.192 03:02:35 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.192 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:45.192 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.192 03:02:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:45.192 03:02:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:21:45.192 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.192 03:02:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:45.450 Malloc10 00:21:45.450 03:02:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.450 03:02:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:21:45.450 03:02:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.450 03:02:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:45.450 03:02:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.450 03:02:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:21:45.450 03:02:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.450 03:02:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:45.450 03:02:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.450 03:02:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:21:45.450 03:02:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.450 03:02:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:45.451 03:02:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.451 03:02:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:45.451 03:02:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:21:45.451 03:02:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.451 03:02:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:45.451 Malloc11 00:21:45.451 03:02:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.451 03:02:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:21:45.451 03:02:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.451 03:02:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:45.451 03:02:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.451 03:02:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:21:45.451 03:02:36 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.451 03:02:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:45.451 03:02:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.451 03:02:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:21:45.451 03:02:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.451 03:02:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:45.451 03:02:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.451 03:02:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:21:45.451 03:02:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:45.451 03:02:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:46.016 03:02:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:21:46.016 03:02:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:21:46.016 03:02:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:21:46.016 03:02:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:21:46.016 03:02:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:21:47.915 03:02:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:21:47.915 03:02:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:21:47.915 03:02:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK1 00:21:47.915 03:02:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:21:47.915 03:02:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:21:47.915 03:02:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:21:47.915 03:02:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:47.915 03:02:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:21:48.850 03:02:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:21:48.850 03:02:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:21:48.850 03:02:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:21:48.850 03:02:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:21:48.850 03:02:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:21:50.749 03:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:21:50.749 03:02:41 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:21:50.749 03:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK2 00:21:50.749 03:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:21:50.749 03:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:21:50.749 03:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:21:50.749 03:02:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:50.749 03:02:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:21:51.337 03:02:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:21:51.337 03:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:21:51.337 03:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:21:51.337 03:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:21:51.337 03:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:21:53.245 03:02:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:21:53.245 03:02:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:21:53.245 03:02:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK3 00:21:53.245 03:02:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:21:53.245 03:02:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:21:53.245 03:02:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:21:53.245 03:02:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:53.245 03:02:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:21:54.177 03:02:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:21:54.177 03:02:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:21:54.177 03:02:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:21:54.177 03:02:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:21:54.177 03:02:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:21:56.076 03:02:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:21:56.076 03:02:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:21:56.076 03:02:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK4 00:21:56.076 03:02:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:21:56.076 
03:02:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:21:56.076 03:02:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:21:56.076 03:02:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:56.076 03:02:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:21:56.642 03:02:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:21:56.642 03:02:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:21:56.642 03:02:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:21:56.642 03:02:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:21:56.642 03:02:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:21:59.170 03:02:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:21:59.170 03:02:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:21:59.170 03:02:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK5 00:21:59.170 03:02:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:21:59.170 03:02:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:21:59.170 03:02:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:21:59.170 03:02:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:59.170 03:02:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:21:59.428 03:02:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:21:59.428 03:02:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:21:59.428 03:02:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:21:59.428 03:02:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:21:59.428 03:02:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:22:01.329 03:02:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:22:01.329 03:02:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:22:01.329 03:02:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK6 00:22:01.329 03:02:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:22:01.329 03:02:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:22:01.329 03:02:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:22:01.329 03:02:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in 
$(seq 1 $NVMF_SUBSYS) 00:22:01.330 03:02:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:22:02.265 03:02:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:22:02.265 03:02:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:22:02.265 03:02:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:22:02.266 03:02:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:22:02.266 03:02:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:22:04.170 03:02:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:22:04.170 03:02:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:22:04.170 03:02:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK7 00:22:04.170 03:02:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:22:04.170 03:02:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:22:04.170 03:02:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:22:04.170 03:02:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:04.170 03:02:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:22:05.109 03:02:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:22:05.109 03:02:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:22:05.109 03:02:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:22:05.109 03:02:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:22:05.109 03:02:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:22:07.011 03:02:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:22:07.011 03:02:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:22:07.011 03:02:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK8 00:22:07.011 03:02:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:22:07.011 03:02:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:22:07.011 03:02:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:22:07.011 03:02:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:07.011 03:02:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:22:07.947 
03:02:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:22:07.947 03:02:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:22:07.947 03:02:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:22:07.947 03:02:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:22:07.947 03:02:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:22:09.880 03:03:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:22:09.880 03:03:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:22:09.880 03:03:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK9 00:22:09.880 03:03:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:22:09.880 03:03:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:22:09.880 03:03:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:22:09.880 03:03:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:09.880 03:03:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:22:10.447 03:03:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:22:10.447 03:03:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:22:10.447 03:03:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:22:10.447 03:03:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:22:10.447 03:03:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:22:12.982 03:03:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:22:12.982 03:03:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:22:12.982 03:03:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK10 00:22:12.982 03:03:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:22:12.982 03:03:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:22:12.982 03:03:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:22:12.982 03:03:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:12.982 03:03:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:22:13.549 03:03:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:22:13.549 03:03:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:22:13.549 03:03:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 
nvme_devices=0 00:22:13.549 03:03:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:22:13.549 03:03:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:22:15.452 03:03:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:22:15.452 03:03:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:22:15.452 03:03:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK11 00:22:15.452 03:03:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:22:15.452 03:03:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:22:15.452 03:03:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:22:15.452 03:03:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:22:15.452 [global] 00:22:15.452 thread=1 00:22:15.452 invalidate=1 00:22:15.452 rw=read 00:22:15.452 time_based=1 00:22:15.452 runtime=10 00:22:15.452 ioengine=libaio 00:22:15.452 direct=1 00:22:15.452 bs=262144 00:22:15.452 iodepth=64 00:22:15.452 norandommap=1 00:22:15.452 numjobs=1 00:22:15.452 00:22:15.452 [job0] 00:22:15.452 filename=/dev/nvme0n1 00:22:15.452 [job1] 00:22:15.452 filename=/dev/nvme10n1 00:22:15.452 [job2] 00:22:15.452 filename=/dev/nvme1n1 00:22:15.452 [job3] 00:22:15.452 filename=/dev/nvme2n1 00:22:15.452 [job4] 00:22:15.452 filename=/dev/nvme3n1 00:22:15.452 [job5] 00:22:15.452 filename=/dev/nvme4n1 00:22:15.452 [job6] 00:22:15.452 filename=/dev/nvme5n1 00:22:15.452 [job7] 00:22:15.452 filename=/dev/nvme6n1 00:22:15.452 [job8] 00:22:15.452 filename=/dev/nvme7n1 00:22:15.452 [job9] 00:22:15.452 filename=/dev/nvme8n1 00:22:15.452 [job10] 00:22:15.452 filename=/dev/nvme9n1 00:22:15.452 Could not set queue depth (nvme0n1) 00:22:15.452 Could not set queue depth (nvme10n1) 00:22:15.452 Could not set queue depth (nvme1n1) 00:22:15.452 Could not set queue depth (nvme2n1) 00:22:15.452 Could not set queue depth (nvme3n1) 00:22:15.452 Could not set queue depth (nvme4n1) 00:22:15.452 Could not set queue depth (nvme5n1) 00:22:15.452 Could not set queue depth (nvme6n1) 00:22:15.452 Could not set queue depth (nvme7n1) 00:22:15.452 Could not set queue depth (nvme8n1) 00:22:15.452 Could not set queue depth (nvme9n1) 00:22:15.710 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:15.710 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:15.710 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:15.710 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:15.710 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:15.710 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:15.710 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:15.710 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:15.710 job8: (g=0): rw=read, 
bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:15.710 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:15.710 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:15.710 fio-3.35 00:22:15.710 Starting 11 threads 00:22:27.915 00:22:27.915 job0: (groupid=0, jobs=1): err= 0: pid=390407: Mon May 13 03:03:16 2024 00:22:27.915 read: IOPS=620, BW=155MiB/s (163MB/s)(1571MiB/10123msec) 00:22:27.915 slat (usec): min=14, max=230464, avg=1503.58, stdev=7926.19 00:22:27.915 clat (msec): min=34, max=802, avg=101.53, stdev=121.14 00:22:27.915 lat (msec): min=35, max=822, avg=103.03, stdev=122.72 00:22:27.915 clat percentiles (msec): 00:22:27.915 | 1.00th=[ 40], 5.00th=[ 42], 10.00th=[ 43], 20.00th=[ 44], 00:22:27.915 | 30.00th=[ 46], 40.00th=[ 48], 50.00th=[ 53], 60.00th=[ 58], 00:22:27.915 | 70.00th=[ 79], 80.00th=[ 116], 90.00th=[ 207], 95.00th=[ 401], 00:22:27.915 | 99.00th=[ 676], 99.50th=[ 760], 99.90th=[ 793], 99.95th=[ 793], 00:22:27.915 | 99.99th=[ 802] 00:22:27.915 bw ( KiB/s): min=11776, max=366080, per=11.44%, avg=159194.45, stdev=127997.62, samples=20 00:22:27.915 iops : min= 46, max= 1430, avg=621.85, stdev=499.99, samples=20 00:22:27.915 lat (msec) : 50=44.18%, 100=32.10%, 250=14.88%, 500=6.75%, 750=1.46% 00:22:27.915 lat (msec) : 1000=0.62% 00:22:27.915 cpu : usr=0.38%, sys=2.27%, ctx=1254, majf=0, minf=4097 00:22:27.915 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:22:27.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:27.915 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:27.915 issued rwts: total=6283,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:27.915 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:27.915 job1: (groupid=0, jobs=1): err= 0: pid=390408: Mon May 13 03:03:16 2024 00:22:27.915 read: IOPS=490, BW=123MiB/s (129MB/s)(1236MiB/10071msec) 00:22:27.915 slat (usec): min=10, max=183443, avg=1661.24, stdev=8278.78 00:22:27.915 clat (msec): min=3, max=984, avg=128.68, stdev=164.02 00:22:27.915 lat (msec): min=3, max=1072, avg=130.34, stdev=166.45 00:22:27.915 clat percentiles (msec): 00:22:27.915 | 1.00th=[ 10], 5.00th=[ 20], 10.00th=[ 29], 20.00th=[ 41], 00:22:27.915 | 30.00th=[ 57], 40.00th=[ 77], 50.00th=[ 91], 60.00th=[ 108], 00:22:27.915 | 70.00th=[ 125], 80.00th=[ 155], 90.00th=[ 211], 95.00th=[ 275], 00:22:27.915 | 99.00th=[ 902], 99.50th=[ 936], 99.90th=[ 978], 99.95th=[ 986], 00:22:27.915 | 99.99th=[ 986] 00:22:27.915 bw ( KiB/s): min=11776, max=277504, per=8.97%, avg=124913.30, stdev=83356.01, samples=20 00:22:27.915 iops : min= 46, max= 1084, avg=487.90, stdev=325.59, samples=20 00:22:27.915 lat (msec) : 4=0.06%, 10=1.07%, 20=3.95%, 50=22.54%, 100=28.75% 00:22:27.915 lat (msec) : 250=36.65%, 500=2.55%, 750=0.38%, 1000=4.05% 00:22:27.915 cpu : usr=0.31%, sys=1.51%, ctx=1351, majf=0, minf=4097 00:22:27.915 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:22:27.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:27.915 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:27.915 issued rwts: total=4942,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:27.915 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:27.915 job2: (groupid=0, jobs=1): err= 0: pid=390410: Mon May 13 03:03:16 2024 00:22:27.915 read: 
IOPS=538, BW=135MiB/s (141MB/s)(1358MiB/10094msec) 00:22:27.915 slat (usec): min=9, max=401986, avg=1350.63, stdev=8409.49 00:22:27.915 clat (msec): min=8, max=812, avg=117.48, stdev=103.91 00:22:27.915 lat (msec): min=8, max=812, avg=118.84, stdev=104.21 00:22:27.915 clat percentiles (msec): 00:22:27.915 | 1.00th=[ 21], 5.00th=[ 42], 10.00th=[ 53], 20.00th=[ 67], 00:22:27.915 | 30.00th=[ 84], 40.00th=[ 90], 50.00th=[ 97], 60.00th=[ 105], 00:22:27.915 | 70.00th=[ 111], 80.00th=[ 129], 90.00th=[ 171], 95.00th=[ 257], 00:22:27.915 | 99.00th=[ 768], 99.50th=[ 785], 99.90th=[ 810], 99.95th=[ 810], 00:22:27.915 | 99.99th=[ 810] 00:22:27.915 bw ( KiB/s): min=28672, max=273920, per=9.87%, avg=137452.30, stdev=60326.53, samples=20 00:22:27.915 iops : min= 112, max= 1070, avg=536.85, stdev=235.71, samples=20 00:22:27.915 lat (msec) : 10=0.04%, 20=0.88%, 50=7.05%, 100=45.72%, 250=40.88% 00:22:27.915 lat (msec) : 500=3.57%, 750=0.72%, 1000=1.14% 00:22:27.915 cpu : usr=0.39%, sys=1.85%, ctx=1281, majf=0, minf=4097 00:22:27.915 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:22:27.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:27.916 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:27.916 issued rwts: total=5433,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:27.916 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:27.916 job3: (groupid=0, jobs=1): err= 0: pid=390416: Mon May 13 03:03:16 2024 00:22:27.916 read: IOPS=324, BW=81.2MiB/s (85.2MB/s)(823MiB/10125msec) 00:22:27.916 slat (usec): min=10, max=724816, avg=1615.03, stdev=15971.95 00:22:27.916 clat (usec): min=1506, max=1087.6k, avg=195199.23, stdev=200398.62 00:22:27.916 lat (usec): min=1576, max=1087.6k, avg=196814.26, stdev=201431.21 00:22:27.916 clat percentiles (msec): 00:22:27.916 | 1.00th=[ 8], 5.00th=[ 18], 10.00th=[ 37], 20.00th=[ 72], 00:22:27.916 | 30.00th=[ 94], 40.00th=[ 105], 50.00th=[ 117], 60.00th=[ 148], 00:22:27.916 | 70.00th=[ 199], 80.00th=[ 262], 90.00th=[ 447], 95.00th=[ 718], 00:22:27.916 | 99.00th=[ 936], 99.50th=[ 1062], 99.90th=[ 1070], 99.95th=[ 1083], 00:22:27.916 | 99.99th=[ 1083] 00:22:27.916 bw ( KiB/s): min=19968, max=173056, per=6.25%, avg=86947.42, stdev=45088.76, samples=19 00:22:27.916 iops : min= 78, max= 676, avg=339.63, stdev=176.12, samples=19 00:22:27.916 lat (msec) : 2=0.09%, 4=0.09%, 10=1.91%, 20=3.28%, 50=8.84% 00:22:27.916 lat (msec) : 100=20.91%, 250=43.50%, 500=13.71%, 750=3.43%, 1000=3.43% 00:22:27.916 lat (msec) : 2000=0.79% 00:22:27.916 cpu : usr=0.22%, sys=0.85%, ctx=1053, majf=0, minf=4097 00:22:27.916 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:22:27.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:27.916 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:27.916 issued rwts: total=3290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:27.916 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:27.916 job4: (groupid=0, jobs=1): err= 0: pid=390417: Mon May 13 03:03:16 2024 00:22:27.916 read: IOPS=544, BW=136MiB/s (143MB/s)(1377MiB/10117msec) 00:22:27.916 slat (usec): min=10, max=179762, avg=1241.24, stdev=5867.19 00:22:27.916 clat (usec): min=1321, max=614296, avg=116237.55, stdev=86506.14 00:22:27.916 lat (usec): min=1349, max=614315, avg=117478.79, stdev=86926.75 00:22:27.916 clat percentiles (msec): 00:22:27.916 | 1.00th=[ 8], 5.00th=[ 16], 10.00th=[ 21], 20.00th=[ 68], 00:22:27.916 | 30.00th=[ 
85], 40.00th=[ 93], 50.00th=[ 101], 60.00th=[ 107], 00:22:27.916 | 70.00th=[ 116], 80.00th=[ 146], 90.00th=[ 220], 95.00th=[ 275], 00:22:27.916 | 99.00th=[ 435], 99.50th=[ 592], 99.90th=[ 609], 99.95th=[ 609], 00:22:27.916 | 99.99th=[ 617] 00:22:27.916 bw ( KiB/s): min=32256, max=243712, per=10.01%, avg=139355.25, stdev=48073.30, samples=20 00:22:27.916 iops : min= 126, max= 952, avg=544.35, stdev=187.79, samples=20 00:22:27.916 lat (msec) : 2=0.09%, 4=0.09%, 10=1.80%, 20=7.59%, 50=6.32% 00:22:27.916 lat (msec) : 100=33.56%, 250=43.42%, 500=6.52%, 750=0.62% 00:22:27.916 cpu : usr=0.26%, sys=1.87%, ctx=1524, majf=0, minf=4097 00:22:27.916 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:22:27.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:27.916 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:27.916 issued rwts: total=5507,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:27.916 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:27.916 job5: (groupid=0, jobs=1): err= 0: pid=390418: Mon May 13 03:03:16 2024 00:22:27.916 read: IOPS=242, BW=60.6MiB/s (63.5MB/s)(613MiB/10120msec) 00:22:27.916 slat (usec): min=9, max=660209, avg=3500.39, stdev=20011.51 00:22:27.916 clat (msec): min=14, max=1048, avg=260.35, stdev=236.11 00:22:27.916 lat (msec): min=14, max=1053, avg=263.85, stdev=239.47 00:22:27.916 clat percentiles (msec): 00:22:27.916 | 1.00th=[ 22], 5.00th=[ 32], 10.00th=[ 53], 20.00th=[ 118], 00:22:27.916 | 30.00th=[ 132], 40.00th=[ 144], 50.00th=[ 169], 60.00th=[ 199], 00:22:27.916 | 70.00th=[ 264], 80.00th=[ 372], 90.00th=[ 768], 95.00th=[ 827], 00:22:27.916 | 99.00th=[ 927], 99.50th=[ 969], 99.90th=[ 1028], 99.95th=[ 1028], 00:22:27.916 | 99.99th=[ 1053] 00:22:27.916 bw ( KiB/s): min=11264, max=129277, per=4.63%, avg=64390.58, stdev=40441.28, samples=19 00:22:27.916 iops : min= 44, max= 504, avg=251.47, stdev=157.89, samples=19 00:22:27.916 lat (msec) : 20=0.90%, 50=8.40%, 100=7.66%, 250=51.00%, 500=17.57% 00:22:27.916 lat (msec) : 750=3.79%, 1000=10.52%, 2000=0.16% 00:22:27.916 cpu : usr=0.15%, sys=0.93%, ctx=712, majf=0, minf=4097 00:22:27.916 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:22:27.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:27.916 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:27.916 issued rwts: total=2453,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:27.916 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:27.916 job6: (groupid=0, jobs=1): err= 0: pid=390419: Mon May 13 03:03:16 2024 00:22:27.916 read: IOPS=1002, BW=251MiB/s (263MB/s)(2512MiB/10021msec) 00:22:27.916 slat (usec): min=9, max=123327, avg=826.33, stdev=3040.72 00:22:27.916 clat (usec): min=1294, max=259211, avg=62976.45, stdev=40221.12 00:22:27.916 lat (usec): min=1319, max=310168, avg=63802.77, stdev=40596.17 00:22:27.916 clat percentiles (msec): 00:22:27.916 | 1.00th=[ 11], 5.00th=[ 34], 10.00th=[ 39], 20.00th=[ 41], 00:22:27.916 | 30.00th=[ 43], 40.00th=[ 45], 50.00th=[ 48], 60.00th=[ 53], 00:22:27.916 | 70.00th=[ 59], 80.00th=[ 74], 90.00th=[ 124], 95.00th=[ 157], 00:22:27.916 | 99.00th=[ 224], 99.50th=[ 236], 99.90th=[ 249], 99.95th=[ 249], 00:22:27.916 | 99.99th=[ 249] 00:22:27.916 bw ( KiB/s): min=126976, max=397824, per=18.36%, avg=255551.45, stdev=98212.53, samples=20 00:22:27.916 iops : min= 496, max= 1554, avg=998.20, stdev=383.71, samples=20 00:22:27.916 lat (msec) : 2=0.04%, 4=0.15%, 
10=0.79%, 20=1.33%, 50=52.63% 00:22:27.916 lat (msec) : 100=31.05%, 250=14.01%, 500=0.01% 00:22:27.916 cpu : usr=0.60%, sys=3.15%, ctx=2268, majf=0, minf=4097 00:22:27.916 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:22:27.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:27.916 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:27.916 issued rwts: total=10046,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:27.916 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:27.916 job7: (groupid=0, jobs=1): err= 0: pid=390420: Mon May 13 03:03:16 2024 00:22:27.916 read: IOPS=589, BW=147MiB/s (155MB/s)(1488MiB/10092msec) 00:22:27.916 slat (usec): min=14, max=140519, avg=1596.03, stdev=4867.56 00:22:27.916 clat (msec): min=5, max=347, avg=106.84, stdev=36.15 00:22:27.916 lat (msec): min=5, max=374, avg=108.43, stdev=36.55 00:22:27.916 clat percentiles (msec): 00:22:27.916 | 1.00th=[ 31], 5.00th=[ 67], 10.00th=[ 74], 20.00th=[ 82], 00:22:27.916 | 30.00th=[ 88], 40.00th=[ 94], 50.00th=[ 102], 60.00th=[ 109], 00:22:27.916 | 70.00th=[ 115], 80.00th=[ 126], 90.00th=[ 150], 95.00th=[ 178], 00:22:27.916 | 99.00th=[ 226], 99.50th=[ 243], 99.90th=[ 338], 99.95th=[ 347], 00:22:27.916 | 99.99th=[ 347] 00:22:27.916 bw ( KiB/s): min=55296, max=207360, per=10.83%, avg=150720.30, stdev=36170.74, samples=20 00:22:27.916 iops : min= 216, max= 810, avg=588.75, stdev=141.29, samples=20 00:22:27.916 lat (msec) : 10=0.13%, 20=0.37%, 50=1.09%, 100=46.85%, 250=51.07% 00:22:27.916 lat (msec) : 500=0.49% 00:22:27.916 cpu : usr=0.41%, sys=2.08%, ctx=1318, majf=0, minf=4097 00:22:27.916 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:22:27.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:27.916 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:27.916 issued rwts: total=5951,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:27.916 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:27.916 job8: (groupid=0, jobs=1): err= 0: pid=390421: Mon May 13 03:03:16 2024 00:22:27.916 read: IOPS=484, BW=121MiB/s (127MB/s)(1223MiB/10099msec) 00:22:27.916 slat (usec): min=9, max=423280, avg=1386.02, stdev=12709.58 00:22:27.916 clat (usec): min=1760, max=1238.8k, avg=130646.28, stdev=179241.24 00:22:27.916 lat (usec): min=1789, max=1238.9k, avg=132032.31, stdev=181597.02 00:22:27.916 clat percentiles (msec): 00:22:27.916 | 1.00th=[ 8], 5.00th=[ 16], 10.00th=[ 23], 20.00th=[ 41], 00:22:27.916 | 30.00th=[ 63], 40.00th=[ 72], 50.00th=[ 83], 60.00th=[ 94], 00:22:27.916 | 70.00th=[ 113], 80.00th=[ 150], 90.00th=[ 239], 95.00th=[ 426], 00:22:27.916 | 99.00th=[ 1036], 99.50th=[ 1083], 99.90th=[ 1217], 99.95th=[ 1234], 00:22:27.916 | 99.99th=[ 1234] 00:22:27.916 bw ( KiB/s): min= 6144, max=336896, per=8.88%, avg=123571.95, stdev=93327.57, samples=20 00:22:27.916 iops : min= 24, max= 1316, avg=482.70, stdev=364.56, samples=20 00:22:27.916 lat (msec) : 2=0.04%, 4=0.29%, 10=1.53%, 20=6.03%, 50=16.46% 00:22:27.916 lat (msec) : 100=39.75%, 250=26.40%, 500=5.38%, 750=0.39%, 1000=2.68% 00:22:27.916 lat (msec) : 2000=1.06% 00:22:27.916 cpu : usr=0.17%, sys=1.41%, ctx=1473, majf=0, minf=4097 00:22:27.916 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:27.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:27.916 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:27.916 issued 
rwts: total=4891,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:27.916 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:27.916 job9: (groupid=0, jobs=1): err= 0: pid=390422: Mon May 13 03:03:16 2024 00:22:27.916 read: IOPS=343, BW=85.8MiB/s (89.9MB/s)(860MiB/10026msec) 00:22:27.916 slat (usec): min=10, max=764448, avg=2139.36, stdev=19756.50 00:22:27.916 clat (msec): min=4, max=1561, avg=184.32, stdev=199.15 00:22:27.916 lat (msec): min=4, max=1561, avg=186.46, stdev=202.24 00:22:27.916 clat percentiles (msec): 00:22:27.916 | 1.00th=[ 17], 5.00th=[ 48], 10.00th=[ 66], 20.00th=[ 83], 00:22:27.916 | 30.00th=[ 100], 40.00th=[ 114], 50.00th=[ 127], 60.00th=[ 140], 00:22:27.916 | 70.00th=[ 155], 80.00th=[ 192], 90.00th=[ 372], 95.00th=[ 818], 00:22:27.916 | 99.00th=[ 986], 99.50th=[ 1028], 99.90th=[ 1036], 99.95th=[ 1036], 00:22:27.916 | 99.99th=[ 1569] 00:22:27.916 bw ( KiB/s): min= 1536, max=173056, per=6.53%, avg=90962.32, stdev=53889.87, samples=19 00:22:27.916 iops : min= 6, max= 676, avg=355.32, stdev=210.50, samples=19 00:22:27.916 lat (msec) : 10=0.29%, 20=0.76%, 50=4.89%, 100=24.92%, 250=55.54% 00:22:27.916 lat (msec) : 500=6.66%, 750=1.48%, 1000=4.65%, 2000=0.81% 00:22:27.916 cpu : usr=0.17%, sys=1.21%, ctx=949, majf=0, minf=4097 00:22:27.916 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:22:27.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:27.916 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:27.916 issued rwts: total=3439,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:27.916 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:27.916 job10: (groupid=0, jobs=1): err= 0: pid=390423: Mon May 13 03:03:16 2024 00:22:27.916 read: IOPS=286, BW=71.7MiB/s (75.2MB/s)(727MiB/10141msec) 00:22:27.916 slat (usec): min=10, max=673163, avg=2068.09, stdev=22521.87 00:22:27.916 clat (msec): min=2, max=1481, avg=220.88, stdev=218.09 00:22:27.916 lat (msec): min=2, max=1481, avg=222.95, stdev=221.11 00:22:27.916 clat percentiles (msec): 00:22:27.916 | 1.00th=[ 5], 5.00th=[ 17], 10.00th=[ 38], 20.00th=[ 89], 00:22:27.916 | 30.00th=[ 115], 40.00th=[ 131], 50.00th=[ 146], 60.00th=[ 180], 00:22:27.916 | 70.00th=[ 209], 80.00th=[ 271], 90.00th=[ 592], 95.00th=[ 802], 00:22:27.916 | 99.00th=[ 877], 99.50th=[ 986], 99.90th=[ 986], 99.95th=[ 986], 00:22:27.916 | 99.99th=[ 1485] 00:22:27.916 bw ( KiB/s): min= 4608, max=152064, per=5.23%, avg=72825.05, stdev=46277.37, samples=20 00:22:27.916 iops : min= 18, max= 594, avg=284.45, stdev=180.77, samples=20 00:22:27.916 lat (msec) : 4=0.55%, 10=2.48%, 20=3.23%, 50=6.46%, 100=11.34% 00:22:27.916 lat (msec) : 250=53.39%, 500=11.62%, 750=3.23%, 1000=7.67%, 2000=0.03% 00:22:27.916 cpu : usr=0.14%, sys=0.81%, ctx=923, majf=0, minf=3721 00:22:27.916 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:22:27.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:27.916 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:27.916 issued rwts: total=2909,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:27.916 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:27.916 00:22:27.916 Run status group 0 (all jobs): 00:22:27.916 READ: bw=1359MiB/s (1425MB/s), 60.6MiB/s-251MiB/s (63.5MB/s-263MB/s), io=13.5GiB (14.5GB), run=10021-10141msec 00:22:27.916 00:22:27.916 Disk stats (read/write): 00:22:27.916 nvme0n1: ios=12356/0, merge=0/0, ticks=1207359/0, in_queue=1207359, util=97.16% 
00:22:27.917 nvme10n1: ios=9692/0, merge=0/0, ticks=1232998/0, in_queue=1232998, util=97.37% 00:22:27.917 nvme1n1: ios=10679/0, merge=0/0, ticks=1237966/0, in_queue=1237966, util=97.64% 00:22:27.917 nvme2n1: ios=6391/0, merge=0/0, ticks=1221246/0, in_queue=1221246, util=97.81% 00:22:27.917 nvme3n1: ios=10839/0, merge=0/0, ticks=1214560/0, in_queue=1214560, util=97.89% 00:22:27.917 nvme4n1: ios=4718/0, merge=0/0, ticks=1210245/0, in_queue=1210245, util=98.23% 00:22:27.917 nvme5n1: ios=19806/0, merge=0/0, ticks=1233662/0, in_queue=1233662, util=98.39% 00:22:27.917 nvme6n1: ios=11716/0, merge=0/0, ticks=1229228/0, in_queue=1229228, util=98.50% 00:22:27.917 nvme7n1: ios=9585/0, merge=0/0, ticks=1235022/0, in_queue=1235022, util=98.91% 00:22:27.917 nvme8n1: ios=6657/0, merge=0/0, ticks=1236975/0, in_queue=1236975, util=99.09% 00:22:27.917 nvme9n1: ios=5658/0, merge=0/0, ticks=1224047/0, in_queue=1224047, util=99.21% 00:22:27.917 03:03:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:22:27.917 [global] 00:22:27.917 thread=1 00:22:27.917 invalidate=1 00:22:27.917 rw=randwrite 00:22:27.917 time_based=1 00:22:27.917 runtime=10 00:22:27.917 ioengine=libaio 00:22:27.917 direct=1 00:22:27.917 bs=262144 00:22:27.917 iodepth=64 00:22:27.917 norandommap=1 00:22:27.917 numjobs=1 00:22:27.917 00:22:27.917 [job0] 00:22:27.917 filename=/dev/nvme0n1 00:22:27.917 [job1] 00:22:27.917 filename=/dev/nvme10n1 00:22:27.917 [job2] 00:22:27.917 filename=/dev/nvme1n1 00:22:27.917 [job3] 00:22:27.917 filename=/dev/nvme2n1 00:22:27.917 [job4] 00:22:27.917 filename=/dev/nvme3n1 00:22:27.917 [job5] 00:22:27.917 filename=/dev/nvme4n1 00:22:27.917 [job6] 00:22:27.917 filename=/dev/nvme5n1 00:22:27.917 [job7] 00:22:27.917 filename=/dev/nvme6n1 00:22:27.917 [job8] 00:22:27.917 filename=/dev/nvme7n1 00:22:27.917 [job9] 00:22:27.917 filename=/dev/nvme8n1 00:22:27.917 [job10] 00:22:27.917 filename=/dev/nvme9n1 00:22:27.917 Could not set queue depth (nvme0n1) 00:22:27.917 Could not set queue depth (nvme10n1) 00:22:27.917 Could not set queue depth (nvme1n1) 00:22:27.917 Could not set queue depth (nvme2n1) 00:22:27.917 Could not set queue depth (nvme3n1) 00:22:27.917 Could not set queue depth (nvme4n1) 00:22:27.917 Could not set queue depth (nvme5n1) 00:22:27.917 Could not set queue depth (nvme6n1) 00:22:27.917 Could not set queue depth (nvme7n1) 00:22:27.917 Could not set queue depth (nvme8n1) 00:22:27.917 Could not set queue depth (nvme9n1) 00:22:27.917 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:27.917 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:27.917 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:27.917 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:27.917 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:27.917 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:27.917 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:27.917 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:27.917 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:27.917 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:27.917 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:27.917 fio-3.35 00:22:27.917 Starting 11 threads 00:22:37.894 00:22:37.894 job0: (groupid=0, jobs=1): err= 0: pid=391801: Mon May 13 03:03:27 2024 00:22:37.894 write: IOPS=229, BW=57.3MiB/s (60.1MB/s)(586MiB/10224msec); 0 zone resets 00:22:37.894 slat (usec): min=17, max=1434.6k, avg=2938.86, stdev=32468.98 00:22:37.894 clat (msec): min=2, max=1600, avg=276.19, stdev=271.04 00:22:37.894 lat (msec): min=2, max=1600, avg=279.13, stdev=273.23 00:22:37.894 clat percentiles (msec): 00:22:37.894 | 1.00th=[ 8], 5.00th=[ 38], 10.00th=[ 65], 20.00th=[ 97], 00:22:37.894 | 30.00th=[ 113], 40.00th=[ 138], 50.00th=[ 176], 60.00th=[ 262], 00:22:37.894 | 70.00th=[ 384], 80.00th=[ 418], 90.00th=[ 506], 95.00th=[ 693], 00:22:37.894 | 99.00th=[ 1536], 99.50th=[ 1569], 99.90th=[ 1603], 99.95th=[ 1603], 00:22:37.894 | 99.99th=[ 1603] 00:22:37.894 bw ( KiB/s): min=14336, max=143872, per=8.26%, avg=64853.33, stdev=34377.42, samples=18 00:22:37.894 iops : min= 56, max= 562, avg=253.33, stdev=134.29, samples=18 00:22:37.894 lat (msec) : 4=0.13%, 10=1.41%, 20=1.49%, 50=3.76%, 100=14.04% 00:22:37.894 lat (msec) : 250=37.99%, 500=30.82%, 750=7.00%, 1000=0.68%, 2000=2.69% 00:22:37.894 cpu : usr=0.67%, sys=0.78%, ctx=1441, majf=0, minf=1 00:22:37.894 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:22:37.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:37.895 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:37.895 issued rwts: total=0,2343,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:37.895 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:37.895 job1: (groupid=0, jobs=1): err= 0: pid=391802: Mon May 13 03:03:27 2024 00:22:37.895 write: IOPS=192, BW=48.1MiB/s (50.4MB/s)(487MiB/10132msec); 0 zone resets 00:22:37.895 slat (usec): min=26, max=684678, avg=4456.03, stdev=24488.56 00:22:37.895 clat (msec): min=11, max=1563, avg=328.07, stdev=287.25 00:22:37.895 lat (msec): min=11, max=1563, avg=332.53, stdev=290.22 00:22:37.895 clat percentiles (msec): 00:22:37.895 | 1.00th=[ 45], 5.00th=[ 105], 10.00th=[ 128], 20.00th=[ 153], 00:22:37.895 | 30.00th=[ 167], 40.00th=[ 180], 50.00th=[ 203], 60.00th=[ 243], 00:22:37.895 | 70.00th=[ 326], 80.00th=[ 481], 90.00th=[ 760], 95.00th=[ 869], 00:22:37.895 | 99.00th=[ 1385], 99.50th=[ 1519], 99.90th=[ 1569], 99.95th=[ 1569], 00:22:37.895 | 99.99th=[ 1569] 00:22:37.895 bw ( KiB/s): min=10240, max=104448, per=6.47%, avg=50803.95, stdev=31468.73, samples=19 00:22:37.895 iops : min= 40, max= 408, avg=198.42, stdev=122.90, samples=19 00:22:37.895 lat (msec) : 20=0.31%, 50=1.39%, 100=3.08%, 250=56.47%, 500=19.56% 00:22:37.895 lat (msec) : 750=8.68%, 1000=7.29%, 2000=3.23% 00:22:37.895 cpu : usr=0.60%, sys=0.66%, ctx=691, majf=0, minf=1 00:22:37.895 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.8% 00:22:37.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:37.895 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:37.895 issued rwts: total=0,1948,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:22:37.895 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:37.895 job2: (groupid=0, jobs=1): err= 0: pid=391803: Mon May 13 03:03:27 2024 00:22:37.895 write: IOPS=342, BW=85.7MiB/s (89.9MB/s)(868MiB/10129msec); 0 zone resets 00:22:37.895 slat (usec): min=25, max=137827, avg=2618.93, stdev=6781.28 00:22:37.895 clat (msec): min=10, max=606, avg=183.83, stdev=101.30 00:22:37.895 lat (msec): min=13, max=606, avg=186.45, stdev=102.73 00:22:37.895 clat percentiles (msec): 00:22:37.895 | 1.00th=[ 31], 5.00th=[ 88], 10.00th=[ 113], 20.00th=[ 124], 00:22:37.895 | 30.00th=[ 132], 40.00th=[ 140], 50.00th=[ 148], 60.00th=[ 167], 00:22:37.895 | 70.00th=[ 186], 80.00th=[ 224], 90.00th=[ 338], 95.00th=[ 439], 00:22:37.895 | 99.00th=[ 542], 99.50th=[ 567], 99.90th=[ 609], 99.95th=[ 609], 00:22:37.895 | 99.99th=[ 609] 00:22:37.895 bw ( KiB/s): min=28672, max=124416, per=11.11%, avg=87307.05, stdev=32803.87, samples=20 00:22:37.895 iops : min= 112, max= 486, avg=341.00, stdev=128.11, samples=20 00:22:37.895 lat (msec) : 20=0.29%, 50=2.56%, 100=2.88%, 250=78.98%, 500=13.56% 00:22:37.895 lat (msec) : 750=1.73% 00:22:37.895 cpu : usr=1.03%, sys=1.14%, ctx=1237, majf=0, minf=1 00:22:37.895 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:22:37.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:37.895 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:37.895 issued rwts: total=0,3473,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:37.895 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:37.895 job3: (groupid=0, jobs=1): err= 0: pid=391804: Mon May 13 03:03:27 2024 00:22:37.895 write: IOPS=267, BW=66.9MiB/s (70.1MB/s)(684MiB/10225msec); 0 zone resets 00:22:37.895 slat (usec): min=25, max=66656, avg=3573.52, stdev=7204.09 00:22:37.895 clat (msec): min=18, max=578, avg=235.38, stdev=97.10 00:22:37.895 lat (msec): min=18, max=578, avg=238.95, stdev=98.20 00:22:37.895 clat percentiles (msec): 00:22:37.895 | 1.00th=[ 69], 5.00th=[ 130], 10.00th=[ 136], 20.00th=[ 144], 00:22:37.895 | 30.00th=[ 180], 40.00th=[ 194], 50.00th=[ 218], 60.00th=[ 241], 00:22:37.895 | 70.00th=[ 266], 80.00th=[ 305], 90.00th=[ 384], 95.00th=[ 426], 00:22:37.895 | 99.00th=[ 542], 99.50th=[ 542], 99.90th=[ 575], 99.95th=[ 575], 00:22:37.895 | 99.99th=[ 575] 00:22:37.895 bw ( KiB/s): min=30720, max=119296, per=8.71%, avg=68414.40, stdev=24212.81, samples=20 00:22:37.895 iops : min= 120, max= 466, avg=267.20, stdev=94.50, samples=20 00:22:37.895 lat (msec) : 20=0.15%, 50=0.44%, 100=0.88%, 250=63.14%, 500=32.94% 00:22:37.895 lat (msec) : 750=2.45% 00:22:37.895 cpu : usr=0.74%, sys=0.86%, ctx=771, majf=0, minf=1 00:22:37.895 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:22:37.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:37.895 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:37.895 issued rwts: total=0,2735,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:37.895 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:37.895 job4: (groupid=0, jobs=1): err= 0: pid=391805: Mon May 13 03:03:27 2024 00:22:37.895 write: IOPS=246, BW=61.5MiB/s (64.5MB/s)(629MiB/10226msec); 0 zone resets 00:22:37.895 slat (usec): min=20, max=1331.5k, avg=2103.23, stdev=27281.32 00:22:37.895 clat (msec): min=3, max=1494, avg=257.60, stdev=251.63 00:22:37.895 lat (msec): min=3, max=1494, avg=259.70, stdev=253.18 00:22:37.895 clat percentiles (msec): 
00:22:37.895 | 1.00th=[ 11], 5.00th=[ 30], 10.00th=[ 40], 20.00th=[ 83], 00:22:37.895 | 30.00th=[ 130], 40.00th=[ 178], 50.00th=[ 203], 60.00th=[ 241], 00:22:37.895 | 70.00th=[ 275], 80.00th=[ 368], 90.00th=[ 468], 95.00th=[ 531], 00:22:37.895 | 99.00th=[ 1435], 99.50th=[ 1452], 99.90th=[ 1485], 99.95th=[ 1502], 00:22:37.895 | 99.99th=[ 1502] 00:22:37.895 bw ( KiB/s): min=31232, max=121344, per=8.88%, avg=69774.22, stdev=26846.52, samples=18 00:22:37.895 iops : min= 122, max= 474, avg=272.56, stdev=104.87, samples=18 00:22:37.895 lat (msec) : 4=0.08%, 10=0.87%, 20=2.07%, 50=10.13%, 100=9.26% 00:22:37.895 lat (msec) : 250=41.72%, 500=28.96%, 750=2.58%, 1000=1.23%, 2000=3.10% 00:22:37.895 cpu : usr=0.61%, sys=0.99%, ctx=1896, majf=0, minf=1 00:22:37.895 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:22:37.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:37.895 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:37.895 issued rwts: total=0,2517,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:37.895 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:37.895 job5: (groupid=0, jobs=1): err= 0: pid=391817: Mon May 13 03:03:27 2024 00:22:37.895 write: IOPS=397, BW=99.3MiB/s (104MB/s)(1007MiB/10131msec); 0 zone resets 00:22:37.895 slat (usec): min=19, max=769258, avg=2230.73, stdev=14540.82 00:22:37.895 clat (msec): min=2, max=883, avg=157.94, stdev=124.31 00:22:37.895 lat (msec): min=2, max=883, avg=160.17, stdev=125.59 00:22:37.895 clat percentiles (msec): 00:22:37.895 | 1.00th=[ 14], 5.00th=[ 66], 10.00th=[ 92], 20.00th=[ 107], 00:22:37.895 | 30.00th=[ 116], 40.00th=[ 122], 50.00th=[ 127], 60.00th=[ 134], 00:22:37.895 | 70.00th=[ 144], 80.00th=[ 169], 90.00th=[ 226], 95.00th=[ 359], 00:22:37.895 | 99.00th=[ 776], 99.50th=[ 835], 99.90th=[ 877], 99.95th=[ 885], 00:22:37.895 | 99.99th=[ 885] 00:22:37.895 bw ( KiB/s): min=28672, max=167936, per=13.59%, avg=106792.42, stdev=40712.97, samples=19 00:22:37.895 iops : min= 112, max= 656, avg=417.16, stdev=159.04, samples=19 00:22:37.895 lat (msec) : 4=0.10%, 10=0.55%, 20=0.65%, 50=2.01%, 100=12.02% 00:22:37.895 lat (msec) : 250=75.73%, 500=5.89%, 750=0.72%, 1000=2.33% 00:22:37.895 cpu : usr=1.32%, sys=1.14%, ctx=1485, majf=0, minf=1 00:22:37.895 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:22:37.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:37.895 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:37.895 issued rwts: total=0,4026,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:37.895 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:37.895 job6: (groupid=0, jobs=1): err= 0: pid=391818: Mon May 13 03:03:27 2024 00:22:37.895 write: IOPS=274, BW=68.5MiB/s (71.9MB/s)(701MiB/10226msec); 0 zone resets 00:22:37.895 slat (usec): min=16, max=1342.3k, avg=1548.49, stdev=25726.01 00:22:37.895 clat (msec): min=4, max=1505, avg=231.73, stdev=279.29 00:22:37.895 lat (msec): min=4, max=1506, avg=233.28, stdev=280.86 00:22:37.895 clat percentiles (msec): 00:22:37.895 | 1.00th=[ 13], 5.00th=[ 33], 10.00th=[ 50], 20.00th=[ 62], 00:22:37.895 | 30.00th=[ 74], 40.00th=[ 132], 50.00th=[ 165], 60.00th=[ 182], 00:22:37.895 | 70.00th=[ 209], 80.00th=[ 271], 90.00th=[ 485], 95.00th=[ 844], 00:22:37.895 | 99.00th=[ 1401], 99.50th=[ 1435], 99.90th=[ 1502], 99.95th=[ 1502], 00:22:37.895 | 99.99th=[ 1502] 00:22:37.895 bw ( KiB/s): min=30720, max=122368, per=9.92%, avg=77937.78, 
stdev=28390.72, samples=18 00:22:37.895 iops : min= 120, max= 478, avg=304.44, stdev=110.90, samples=18 00:22:37.895 lat (msec) : 10=0.50%, 20=2.00%, 50=8.77%, 100=24.47%, 250=42.12% 00:22:37.895 lat (msec) : 500=12.55%, 750=2.46%, 1000=3.03%, 2000=4.10% 00:22:37.895 cpu : usr=0.71%, sys=1.02%, ctx=2171, majf=0, minf=1 00:22:37.895 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:22:37.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:37.895 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:37.895 issued rwts: total=0,2804,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:37.895 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:37.895 job7: (groupid=0, jobs=1): err= 0: pid=391819: Mon May 13 03:03:27 2024 00:22:37.895 write: IOPS=266, BW=66.6MiB/s (69.8MB/s)(675MiB/10132msec); 0 zone resets 00:22:37.895 slat (usec): min=23, max=1409.9k, avg=2463.16, stdev=27704.14 00:22:37.895 clat (msec): min=5, max=1581, avg=237.62, stdev=226.81 00:22:37.895 lat (msec): min=5, max=1581, avg=240.08, stdev=228.38 00:22:37.895 clat percentiles (msec): 00:22:37.895 | 1.00th=[ 22], 5.00th=[ 42], 10.00th=[ 80], 20.00th=[ 116], 00:22:37.895 | 30.00th=[ 142], 40.00th=[ 165], 50.00th=[ 182], 60.00th=[ 207], 00:22:37.895 | 70.00th=[ 245], 80.00th=[ 330], 90.00th=[ 414], 95.00th=[ 451], 00:22:37.895 | 99.00th=[ 1519], 99.50th=[ 1536], 99.90th=[ 1586], 99.95th=[ 1586], 00:22:37.895 | 99.99th=[ 1586] 00:22:37.895 bw ( KiB/s): min=14336, max=194560, per=9.54%, avg=74951.11, stdev=39020.49, samples=18 00:22:37.895 iops : min= 56, max= 760, avg=292.78, stdev=152.42, samples=18 00:22:37.895 lat (msec) : 10=0.19%, 20=0.67%, 50=5.63%, 100=8.86%, 250=56.26% 00:22:37.895 lat (msec) : 500=24.83%, 750=1.22%, 2000=2.34% 00:22:37.895 cpu : usr=0.70%, sys=0.85%, ctx=1503, majf=0, minf=1 00:22:37.895 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:22:37.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:37.895 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:37.895 issued rwts: total=0,2698,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:37.895 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:37.895 job8: (groupid=0, jobs=1): err= 0: pid=391820: Mon May 13 03:03:27 2024 00:22:37.895 write: IOPS=249, BW=62.4MiB/s (65.5MB/s)(633MiB/10135msec); 0 zone resets 00:22:37.895 slat (usec): min=23, max=483521, avg=3093.34, stdev=16707.81 00:22:37.895 clat (msec): min=6, max=1111, avg=253.04, stdev=201.61 00:22:37.896 lat (msec): min=6, max=1111, avg=256.13, stdev=203.15 00:22:37.896 clat percentiles (msec): 00:22:37.896 | 1.00th=[ 16], 5.00th=[ 44], 10.00th=[ 89], 20.00th=[ 142], 00:22:37.896 | 30.00th=[ 150], 40.00th=[ 167], 50.00th=[ 180], 60.00th=[ 213], 00:22:37.896 | 70.00th=[ 251], 80.00th=[ 326], 90.00th=[ 550], 95.00th=[ 726], 00:22:37.896 | 99.00th=[ 1020], 99.50th=[ 1036], 99.90th=[ 1116], 99.95th=[ 1116], 00:22:37.896 | 99.99th=[ 1116] 00:22:37.896 bw ( KiB/s): min=12800, max=112640, per=8.04%, avg=63180.80, stdev=32777.01, samples=20 00:22:37.896 iops : min= 50, max= 440, avg=246.80, stdev=128.04, samples=20 00:22:37.896 lat (msec) : 10=0.24%, 20=1.19%, 50=5.22%, 100=4.39%, 250=58.71% 00:22:37.896 lat (msec) : 500=19.20%, 750=6.32%, 1000=3.28%, 2000=1.46% 00:22:37.896 cpu : usr=0.75%, sys=0.74%, ctx=1218, majf=0, minf=1 00:22:37.896 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:22:37.896 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:37.896 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:37.896 issued rwts: total=0,2531,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:37.896 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:37.896 job9: (groupid=0, jobs=1): err= 0: pid=391821: Mon May 13 03:03:27 2024 00:22:37.896 write: IOPS=270, BW=67.7MiB/s (71.0MB/s)(694MiB/10246msec); 0 zone resets 00:22:37.896 slat (usec): min=25, max=103377, avg=3410.36, stdev=7348.94 00:22:37.896 clat (msec): min=18, max=579, avg=232.62, stdev=99.40 00:22:37.896 lat (msec): min=18, max=579, avg=236.03, stdev=100.58 00:22:37.896 clat percentiles (msec): 00:22:37.896 | 1.00th=[ 53], 5.00th=[ 120], 10.00th=[ 136], 20.00th=[ 138], 00:22:37.896 | 30.00th=[ 176], 40.00th=[ 188], 50.00th=[ 213], 60.00th=[ 239], 00:22:37.896 | 70.00th=[ 264], 80.00th=[ 309], 90.00th=[ 388], 95.00th=[ 422], 00:22:37.896 | 99.00th=[ 518], 99.50th=[ 527], 99.90th=[ 558], 99.95th=[ 558], 00:22:37.896 | 99.99th=[ 584] 00:22:37.896 bw ( KiB/s): min=30720, max=118784, per=8.84%, avg=69418.15, stdev=25431.74, samples=20 00:22:37.896 iops : min= 120, max= 464, avg=271.15, stdev=99.33, samples=20 00:22:37.896 lat (msec) : 20=0.14%, 50=0.61%, 100=2.95%, 250=62.28%, 500=32.35% 00:22:37.896 lat (msec) : 750=1.66% 00:22:37.896 cpu : usr=0.76%, sys=0.88%, ctx=892, majf=0, minf=1 00:22:37.896 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:22:37.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:37.896 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:37.896 issued rwts: total=0,2776,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:37.896 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:37.896 job10: (groupid=0, jobs=1): err= 0: pid=391822: Mon May 13 03:03:27 2024 00:22:37.896 write: IOPS=354, BW=88.6MiB/s (92.9MB/s)(897MiB/10128msec); 0 zone resets 00:22:37.896 slat (usec): min=20, max=1011.9k, avg=2148.43, stdev=17435.58 00:22:37.896 clat (msec): min=14, max=1342, avg=178.39, stdev=153.79 00:22:37.896 lat (msec): min=14, max=1346, avg=180.54, stdev=154.54 00:22:37.896 clat percentiles (msec): 00:22:37.896 | 1.00th=[ 50], 5.00th=[ 99], 10.00th=[ 108], 20.00th=[ 118], 00:22:37.896 | 30.00th=[ 127], 40.00th=[ 136], 50.00th=[ 148], 60.00th=[ 165], 00:22:37.896 | 70.00th=[ 178], 80.00th=[ 192], 90.00th=[ 239], 95.00th=[ 279], 00:22:37.896 | 99.00th=[ 1167], 99.50th=[ 1318], 99.90th=[ 1334], 99.95th=[ 1334], 00:22:37.896 | 99.99th=[ 1334] 00:22:37.896 bw ( KiB/s): min=12288, max=135168, per=12.09%, avg=95005.21, stdev=27379.14, samples=19 00:22:37.896 iops : min= 48, max= 528, avg=371.11, stdev=106.95, samples=19 00:22:37.896 lat (msec) : 20=0.03%, 50=0.98%, 100=4.46%, 250=85.90%, 500=6.27% 00:22:37.896 lat (msec) : 750=0.61%, 2000=1.76% 00:22:37.896 cpu : usr=1.01%, sys=1.18%, ctx=1576, majf=0, minf=1 00:22:37.896 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2% 00:22:37.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:37.896 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:37.896 issued rwts: total=0,3589,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:37.896 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:37.896 00:22:37.896 Run status group 0 (all jobs): 00:22:37.896 WRITE: bw=767MiB/s (804MB/s), 48.1MiB/s-99.3MiB/s (50.4MB/s-104MB/s), io=7860MiB (8242MB), run=10128-10246msec 00:22:37.896 
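Both fio-wrapper passes above (the read pass launched at multiconnection.sh@33 and this randwrite pass at @34) drive the same layout: one libaio job per connected namespace, 256 KiB blocks, queue depth 64, 10-second time-based runs, so each jobN line in the output maps one-to-one to a subsystem. As a rough standalone equivalent for a single device, assuming scripts/fio-wrapper does little more than expand its -p/-i/-d/-t/-r flags into the job file printed above (the real wrapper may do more), one could run:

    fio --name=job0 --filename=/dev/nvme0n1 \
        --rw=randwrite --bs=262144 --iodepth=64 --ioengine=libaio --direct=1 \
        --time_based --runtime=10 --norandommap --numjobs=1 --thread --invalidate=1

The /dev/nvmeXn1 device names are simply whatever the kernel assigned to the eleven connected namespaces and will differ between runs.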
00:22:37.896 Disk stats (read/write): 00:22:37.896 nvme0n1: ios=49/4650, merge=0/0, ticks=67/1250092, in_queue=1250159, util=97.44% 00:22:37.896 nvme10n1: ios=49/3721, merge=0/0, ticks=3553/1152802, in_queue=1156355, util=99.23% 00:22:37.896 nvme1n1: ios=49/6752, merge=0/0, ticks=3379/1196569, in_queue=1199948, util=99.51% 00:22:37.896 nvme2n1: ios=40/5434, merge=0/0, ticks=742/1229818, in_queue=1230560, util=99.83% 00:22:37.896 nvme3n1: ios=53/4996, merge=0/0, ticks=2964/1255288, in_queue=1258252, util=99.89% 00:22:37.896 nvme4n1: ios=43/7865, merge=0/0, ticks=2967/1107837, in_queue=1110804, util=99.81% 00:22:37.896 nvme5n1: ios=20/5570, merge=0/0, ticks=40/1258457, in_queue=1258497, util=98.32% 00:22:37.896 nvme6n1: ios=51/5197, merge=0/0, ticks=3285/1222944, in_queue=1226229, util=100.00% 00:22:37.896 nvme7n1: ios=26/4884, merge=0/0, ticks=691/1218355, in_queue=1219046, util=99.88% 00:22:37.896 nvme8n1: ios=44/5479, merge=0/0, ticks=694/1224615, in_queue=1225309, util=100.00% 00:22:37.896 nvme9n1: ios=0/6980, merge=0/0, ticks=0/1217235, in_queue=1217235, util=99.06% 00:22:37.896 03:03:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:22:37.896 03:03:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:22:37.896 03:03:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:37.896 03:03:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:37.896 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:37.896 03:03:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:22:37.896 03:03:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:22:37.896 03:03:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:22:37.896 03:03:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:22:37.896 03:03:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:22:37.896 03:03:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK1 00:22:37.896 03:03:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:22:37.896 03:03:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:37.896 03:03:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.896 03:03:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:37.896 03:03:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.896 03:03:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:37.896 03:03:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:22:37.896 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:22:37.896 03:03:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:22:37.896 03:03:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:22:37.896 03:03:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:22:37.896 03:03:28 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:22:37.896 03:03:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:22:37.896 03:03:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK2 00:22:37.896 03:03:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:22:37.896 03:03:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:37.896 03:03:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.896 03:03:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:37.896 03:03:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.896 03:03:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:37.896 03:03:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:22:37.896 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:22:37.896 03:03:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:22:37.896 03:03:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:22:37.896 03:03:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:22:37.896 03:03:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:22:37.896 03:03:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:22:37.896 03:03:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK3 00:22:37.896 03:03:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:22:37.896 03:03:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:22:37.896 03:03:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.896 03:03:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:37.896 03:03:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.896 03:03:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:37.896 03:03:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:22:38.155 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:22:38.155 03:03:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:22:38.155 03:03:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:22:38.155 03:03:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:22:38.155 03:03:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:22:38.155 03:03:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:22:38.155 03:03:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK4 00:22:38.155 03:03:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:22:38.155 03:03:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:22:38.155 03:03:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.155 03:03:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:38.155 03:03:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.155 03:03:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:38.155 03:03:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:22:38.413 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:22:38.413 03:03:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:22:38.414 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:22:38.414 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:22:38.414 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:22:38.414 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:22:38.414 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK5 00:22:38.414 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:22:38.414 03:03:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:22:38.414 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.414 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:38.414 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.414 03:03:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:38.414 03:03:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:22:38.672 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:22:38.672 03:03:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:22:38.672 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:22:38.672 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:22:38.672 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:22:38.672 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:22:38.672 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK6 00:22:38.672 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:22:38.672 03:03:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:22:38.672 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.672 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:38.672 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.672 03:03:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:22:38.672 03:03:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:22:38.930 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:22:38.930 03:03:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:22:38.930 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:22:38.930 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:22:38.930 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:22:38.930 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:22:38.930 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK7 00:22:38.930 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:22:38.930 03:03:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:22:38.930 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.930 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:38.930 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.930 03:03:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:38.930 03:03:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:22:39.216 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:22:39.216 03:03:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:22:39.216 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:22:39.216 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:22:39.216 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:22:39.216 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:22:39.216 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK8 00:22:39.216 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:22:39.216 03:03:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:22:39.216 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.216 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:39.216 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.216 03:03:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:39.216 03:03:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:22:39.216 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:22:39.216 03:03:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:22:39.216 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:22:39.216 03:03:29 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:22:39.216 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:22:39.216 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:22:39.216 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK9 00:22:39.216 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:22:39.216 03:03:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:22:39.216 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.216 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:39.216 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.216 03:03:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:39.216 03:03:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:22:39.216 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:22:39.216 03:03:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:22:39.216 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:22:39.216 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:22:39.217 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:22:39.217 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:22:39.217 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK10 00:22:39.217 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:22:39.217 03:03:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:22:39.217 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.217 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:39.217 03:03:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.217 03:03:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:39.217 03:03:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:22:39.483 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:22:39.483 03:03:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:22:39.483 03:03:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:22:39.483 03:03:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:22:39.483 03:03:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:22:39.483 03:03:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:22:39.483 03:03:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK11 00:22:39.483 03:03:30 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:22:39.483 03:03:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:22:39.483 03:03:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.483 03:03:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:39.483 03:03:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.483 03:03:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:22:39.483 03:03:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:22:39.483 03:03:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:22:39.483 03:03:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:39.483 03:03:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:22:39.483 03:03:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:39.483 03:03:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:22:39.483 03:03:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:39.483 03:03:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:39.483 rmmod nvme_tcp 00:22:39.483 rmmod nvme_fabrics 00:22:39.483 rmmod nvme_keyring 00:22:39.483 03:03:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:39.483 03:03:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:22:39.483 03:03:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:22:39.483 03:03:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 386155 ']' 00:22:39.483 03:03:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 386155 00:22:39.483 03:03:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@946 -- # '[' -z 386155 ']' 00:22:39.483 03:03:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@950 -- # kill -0 386155 00:22:39.483 03:03:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # uname 00:22:39.483 03:03:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:39.483 03:03:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 386155 00:22:39.483 03:03:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:39.483 03:03:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:39.483 03:03:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@964 -- # echo 'killing process with pid 386155' 00:22:39.483 killing process with pid 386155 00:22:39.483 03:03:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@965 -- # kill 386155 00:22:39.483 [2024-05-13 03:03:30.189358] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:39.483 03:03:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@970 -- # wait 386155 00:22:40.050 03:03:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:40.050 03:03:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ 
tcp == \t\c\p ]] 00:22:40.050 03:03:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:40.050 03:03:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:40.050 03:03:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:40.050 03:03:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:40.050 03:03:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:40.050 03:03:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.583 03:03:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:42.583 00:22:42.583 real 0m59.738s 00:22:42.583 user 2m58.886s 00:22:42.583 sys 0m22.151s 00:22:42.583 03:03:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:42.583 03:03:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:42.583 ************************************ 00:22:42.583 END TEST nvmf_multiconnection 00:22:42.583 ************************************ 00:22:42.583 03:03:32 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:22:42.583 03:03:32 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:42.583 03:03:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:42.583 03:03:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:42.583 ************************************ 00:22:42.583 START TEST nvmf_initiator_timeout 00:22:42.583 ************************************ 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:22:42.583 * Looking for test storage... 
00:22:42.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:42.583 03:03:32 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:22:42.583 03:03:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:44.489 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:44.489 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:22:44.489 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:44.489 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:44.489 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:44.489 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:44.489 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:44.489 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:22:44.489 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:44.489 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:22:44.489 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:22:44.489 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:22:44.489 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:22:44.489 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:22:44.489 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:44.490 
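The device scan that follows keys off PCI vendor/device IDs (0x8086:0x159b is the Intel E810 function bound to the ice driver on this rig) and then maps each matching function to its kernel netdev through sysfs. A minimal hand-rolled equivalent of that lookup, assuming lspci is installed and the same device ID applies:

  # list E810 (8086:159b) functions and the netdevs sysfs exposes for them
  for pci in $(lspci -Dnn -d 8086:159b | awk '{print $1}'); do
      for dev in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$dev" ] && echo "Found net device under $pci: $(basename "$dev")"
      done
  done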
03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:44.490 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:44.490 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:44.490 Found net devices 
under 0000:0a:00.0: cvl_0_0 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:44.490 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:44.490 03:03:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:44.490 03:03:34 
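The nvmf_tcp_init plumbing logged around this point builds a two-port loopback: one E810 port (cvl_0_0) is moved into a fresh namespace and becomes the target at 10.0.0.2/24, its peer (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1/24, the NVMe/TCP port is opened in iptables, and both directions are pinged before the target starts. A condensed sketch of those steps, using the interface and namespace names from this run (they will differ on other hosts):

  TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk   # names taken from this log
  sudo ip netns add "$NS"
  sudo ip link set "$TGT_IF" netns "$NS"
  sudo ip addr add 10.0.0.1/24 dev "$INI_IF"                      # initiator side
  sudo ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target side
  sudo ip link set "$INI_IF" up
  sudo ip netns exec "$NS" ip link set "$TGT_IF" up
  sudo ip netns exec "$NS" ip link set lo up
  sudo iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                               # initiator -> target
  sudo ip netns exec "$NS" ping -c 1 10.0.0.1      # target -> initiator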
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:44.490 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:44.490 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:44.490 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:44.490 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:44.490 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:22:44.490 00:22:44.490 --- 10.0.0.2 ping statistics --- 00:22:44.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.490 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:22:44.490 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:44.490 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:44.490 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:22:44.490 00:22:44.490 --- 10.0.0.1 ping statistics --- 00:22:44.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.490 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:22:44.490 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:44.490 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:22:44.490 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:44.490 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:44.490 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:44.490 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:44.490 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:44.490 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:44.490 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:44.490 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:22:44.490 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:44.490 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:44.490 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:44.490 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=394864 00:22:44.490 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:44.490 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 394864 00:22:44.490 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@827 -- # '[' -z 394864 ']' 00:22:44.490 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:44.490 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:44.490 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:22:44.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:44.491 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:44.491 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:44.491 [2024-05-13 03:03:35.134866] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:22:44.491 [2024-05-13 03:03:35.134942] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:44.491 EAL: No free 2048 kB hugepages reported on node 1 00:22:44.491 [2024-05-13 03:03:35.173666] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:44.491 [2024-05-13 03:03:35.200729] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:44.749 [2024-05-13 03:03:35.296901] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:44.749 [2024-05-13 03:03:35.296945] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:44.749 [2024-05-13 03:03:35.296959] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:44.749 [2024-05-13 03:03:35.296970] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:44.749 [2024-05-13 03:03:35.296980] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:44.749 [2024-05-13 03:03:35.300718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:44.749 [2024-05-13 03:03:35.300861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:44.749 [2024-05-13 03:03:35.300910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:44.749 [2024-05-13 03:03:35.300914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.749 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:44.749 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # return 0 00:22:44.749 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:44.749 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:44.749 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:44.749 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:44.749 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:22:44.749 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:44.749 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.749 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:44.749 Malloc0 00:22:44.749 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.749 03:03:35 
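The nvmfappstart step a few entries above boils down to launching nvmf_tgt inside the target namespace and polling its RPC socket before any rpc_cmd is issued. A rough sketch with the binary, core mask, and socket path from this run (the harness tracks the real target pid itself; $! here only captures the wrapper):

  sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # wait for the UNIX-domain RPC socket before configuring the target
  until sudo ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done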
nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:22:44.749 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.749 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:44.749 Delay0 00:22:44.749 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.749 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:44.749 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.749 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:44.749 [2024-05-13 03:03:35.483846] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:44.749 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.749 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:22:44.749 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.749 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:44.749 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.749 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:44.749 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.749 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:44.749 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.749 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:44.749 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.749 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:44.749 [2024-05-13 03:03:35.511861] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:44.749 [2024-05-13 03:03:35.512159] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:44.749 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.749 03:03:35 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:22:45.315 03:03:36 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:22:45.315 03:03:36 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1194 -- # local i=0 00:22:45.315 03:03:36 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:22:45.315 03:03:36 nvmf_tcp.nvmf_initiator_timeout -- 
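The rpc_cmd sequence in the entries just above is the whole data path for this test: a RAM-backed Malloc bdev wrapped in a delay bdev, exported over NVMe/TCP, then attached from the kernel host stack. Condensed into direct rpc.py and nvme-cli calls (socket path and transport flags are the harness values seen here; delay values are in microseconds):

  RPC="sudo ./scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC bdev_malloc_create 64 512 -b Malloc0                  # 64 MiB bdev, 512 B blocks
  $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator side; host NQN/ID are the generated values from this run
  sudo nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
       --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
       --hostid=5b23e107-7094-e311-b1cb-001e67a97d55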
common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:22:45.315 03:03:36 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1201 -- # sleep 2 00:22:47.840 03:03:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:22:47.840 03:03:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:22:47.840 03:03:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:22:47.840 03:03:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:22:47.840 03:03:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:22:47.840 03:03:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # return 0 00:22:47.840 03:03:38 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=395286 00:22:47.840 03:03:38 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:22:47.840 03:03:38 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:22:47.840 [global] 00:22:47.840 thread=1 00:22:47.840 invalidate=1 00:22:47.840 rw=write 00:22:47.840 time_based=1 00:22:47.840 runtime=60 00:22:47.840 ioengine=libaio 00:22:47.840 direct=1 00:22:47.840 bs=4096 00:22:47.840 iodepth=1 00:22:47.840 norandommap=0 00:22:47.840 numjobs=1 00:22:47.840 00:22:47.840 verify_dump=1 00:22:47.840 verify_backlog=512 00:22:47.840 verify_state_save=0 00:22:47.840 do_verify=1 00:22:47.840 verify=crc32c-intel 00:22:47.840 [job0] 00:22:47.840 filename=/dev/nvme0n1 00:22:47.840 Could not set queue depth (nvme0n1) 00:22:47.840 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:47.840 fio-3.35 00:22:47.840 Starting 1 thread 00:22:50.366 03:03:41 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:22:50.366 03:03:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.366 03:03:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:50.366 true 00:22:50.366 03:03:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.366 03:03:41 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:22:50.366 03:03:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.366 03:03:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:50.366 true 00:22:50.366 03:03:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.366 03:03:41 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:22:50.366 03:03:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.366 03:03:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:50.366 true 00:22:50.366 03:03:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.366 03:03:41 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 
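The fio-wrapper call above generates the job file echoed into the log; a standalone command line for the same workload would look roughly like this (the device name comes from the waitforserial lookup, /dev/nvme0n1 on this run):

  fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 --thread=1 \
      --rw=write --bs=4096 --iodepth=1 --numjobs=1 --invalidate=1 --norandommap=0 \
      --time_based=1 --runtime=60 \
      --do_verify=1 --verify=crc32c-intel --verify_dump=1 --verify_backlog=512 \
      --verify_state_save=0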
p99_write 310000000 00:22:50.366 03:03:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.366 03:03:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:50.366 true 00:22:50.366 03:03:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.366 03:03:41 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:22:53.647 03:03:44 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:22:53.647 03:03:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.647 03:03:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:53.647 true 00:22:53.647 03:03:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.647 03:03:44 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:22:53.647 03:03:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.647 03:03:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:53.647 true 00:22:53.647 03:03:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.647 03:03:44 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:22:53.647 03:03:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.647 03:03:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:53.647 true 00:22:53.647 03:03:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.647 03:03:44 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:22:53.647 03:03:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.647 03:03:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:53.647 true 00:22:53.647 03:03:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.647 03:03:44 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:22:53.647 03:03:44 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 395286 00:23:49.883 00:23:49.883 job0: (groupid=0, jobs=1): err= 0: pid=395360: Mon May 13 03:04:38 2024 00:23:49.883 read: IOPS=30, BW=124KiB/s (127kB/s)(7420KiB/60019msec) 00:23:49.883 slat (usec): min=6, max=10778, avg=34.62, stdev=303.27 00:23:49.883 clat (usec): min=442, max=40954k, avg=31818.74, stdev=950793.02 00:23:49.883 lat (usec): min=463, max=40954k, avg=31853.35, stdev=950792.54 00:23:49.883 clat percentiles (usec): 00:23:49.883 | 1.00th=[ 465], 5.00th=[ 482], 10.00th=[ 490], 00:23:49.883 | 20.00th=[ 502], 30.00th=[ 519], 40.00th=[ 529], 00:23:49.883 | 50.00th=[ 537], 60.00th=[ 545], 70.00th=[ 594], 00:23:49.883 | 80.00th=[ 41157], 90.00th=[ 41157], 95.00th=[ 41157], 00:23:49.883 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[ 42206], 00:23:49.883 | 99.95th=[17112761], 99.99th=[17112761] 00:23:49.883 write: IOPS=34, BW=136KiB/s (140kB/s)(8192KiB/60019msec); 0 zone resets 00:23:49.883 slat (nsec): min=8232, max=91487, avg=33624.68, stdev=12586.47 00:23:49.883 clat (usec): 
min=286, max=1697, avg=406.31, stdev=81.39 00:23:49.883 lat (usec): min=294, max=1736, avg=439.94, stdev=88.52 00:23:49.883 clat percentiles (usec): 00:23:49.883 | 1.00th=[ 302], 5.00th=[ 314], 10.00th=[ 326], 20.00th=[ 343], 00:23:49.883 | 30.00th=[ 359], 40.00th=[ 379], 50.00th=[ 392], 60.00th=[ 404], 00:23:49.883 | 70.00th=[ 420], 80.00th=[ 453], 90.00th=[ 537], 95.00th=[ 570], 00:23:49.883 | 99.00th=[ 627], 99.50th=[ 635], 99.90th=[ 701], 99.95th=[ 701], 00:23:49.883 | 99.99th=[ 1696] 00:23:49.883 bw ( KiB/s): min= 1304, max= 4552, per=100.00%, avg=3276.80, stdev=1391.52, samples=5 00:23:49.883 iops : min= 326, max= 1138, avg=819.20, stdev=347.88, samples=5 00:23:49.883 lat (usec) : 500=54.42%, 750=34.56%, 1000=0.13% 00:23:49.883 lat (msec) : 2=0.03%, 10=0.05%, 50=10.79%, >=2000=0.03% 00:23:49.883 cpu : usr=0.09%, sys=0.23%, ctx=3906, majf=0, minf=2 00:23:49.883 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:49.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:49.883 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:49.883 issued rwts: total=1855,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:49.883 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:49.883 00:23:49.883 Run status group 0 (all jobs): 00:23:49.883 READ: bw=124KiB/s (127kB/s), 124KiB/s-124KiB/s (127kB/s-127kB/s), io=7420KiB (7598kB), run=60019-60019msec 00:23:49.883 WRITE: bw=136KiB/s (140kB/s), 136KiB/s-136KiB/s (140kB/s-140kB/s), io=8192KiB (8389kB), run=60019-60019msec 00:23:49.883 00:23:49.883 Disk stats (read/write): 00:23:49.883 nvme0n1: ios=1932/2048, merge=0/0, ticks=18320/767, in_queue=19087, util=99.92% 00:23:49.883 03:04:38 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:49.883 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:49.883 03:04:38 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:49.883 03:04:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1215 -- # local i=0 00:23:49.883 03:04:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:23:49.884 03:04:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:49.884 03:04:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:23:49.884 03:04:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:49.884 03:04:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # return 0 00:23:49.884 03:04:38 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:23:49.884 03:04:38 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:23:49.884 nvmf hotplug test: fio successful as expected 00:23:49.884 03:04:38 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:49.884 03:04:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.884 03:04:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:49.884 03:04:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.884 03:04:38 
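The core of the timeout test is the pair of bdev_delay_update_latency bursts a few entries back: every latency knob on Delay0 is raised from the 30 us baseline to tens of seconds per I/O (31,000,000 us, with an even larger p99 write value in this run), held for a few seconds while fio keeps the queue busy, then dropped back so the job can finish and verify. A condensed sketch of that toggle:

  RPC="sudo ./scripts/rpc.py -s /var/tmp/spdk.sock"
  for knob in avg_read avg_write p99_read p99_write; do
      $RPC bdev_delay_update_latency Delay0 "$knob" 31000000   # ~31 s per I/O
  done
  sleep 3
  for knob in avg_read avg_write p99_read p99_write; do
      $RPC bdev_delay_update_latency Delay0 "$knob" 30         # back to the 30 us baseline
  done

The summary numbers above are self-consistent: 1855 reads x 4 KiB over the 60.019 s run is roughly 124 KiB/s, and 2048 writes x 4 KiB is roughly 136 KiB/s, matching the reported bandwidths.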
nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:23:49.884 03:04:38 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:23:49.884 03:04:38 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:23:49.884 03:04:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:49.884 03:04:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:23:49.884 03:04:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:49.884 03:04:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:23:49.884 03:04:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:49.884 03:04:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:49.884 rmmod nvme_tcp 00:23:49.884 rmmod nvme_fabrics 00:23:49.884 rmmod nvme_keyring 00:23:49.884 03:04:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:49.884 03:04:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:23:49.884 03:04:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:23:49.884 03:04:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 394864 ']' 00:23:49.884 03:04:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 394864 00:23:49.884 03:04:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@946 -- # '[' -z 394864 ']' 00:23:49.884 03:04:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # kill -0 394864 00:23:49.884 03:04:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # uname 00:23:49.884 03:04:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:49.884 03:04:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 394864 00:23:49.884 03:04:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:49.884 03:04:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:49.884 03:04:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 394864' 00:23:49.884 killing process with pid 394864 00:23:49.884 03:04:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@965 -- # kill 394864 00:23:49.884 [2024-05-13 03:04:38.631530] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:49.884 03:04:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@970 -- # wait 394864 00:23:49.884 03:04:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:49.884 03:04:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:49.884 03:04:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:49.884 03:04:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:49.884 03:04:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:49.884 03:04:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:49.884 03:04:38 
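The teardown in the surrounding entries mirrors the setup: disconnect the host, unload the NVMe fabrics modules, stop the target, and undo the namespace plumbing. Roughly, with the names from this run (the namespace removal is performed by the harness's _remove_spdk_ns helper; ip netns del is assumed here as its effect):

  sudo nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  rm -f ./local-job0-0-verify.state
  sudo modprobe -v -r nvme-tcp
  sudo modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                        # 394864 in this run
  sudo ip netns del cvl_0_0_ns_spdk      # assumed equivalent of _remove_spdk_ns
  sudo ip -4 addr flush cvl_0_1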
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:49.884 03:04:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.143 03:04:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:50.143 00:23:50.143 real 1m8.112s 00:23:50.143 user 4m9.035s 00:23:50.143 sys 0m7.143s 00:23:50.143 03:04:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:50.143 03:04:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:50.143 ************************************ 00:23:50.143 END TEST nvmf_initiator_timeout 00:23:50.143 ************************************ 00:23:50.402 03:04:40 nvmf_tcp -- nvmf/nvmf.sh@70 -- # [[ phy == phy ]] 00:23:50.402 03:04:40 nvmf_tcp -- nvmf/nvmf.sh@71 -- # '[' tcp = tcp ']' 00:23:50.402 03:04:40 nvmf_tcp -- nvmf/nvmf.sh@72 -- # gather_supported_nvmf_pci_devs 00:23:50.402 03:04:40 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:23:50.402 03:04:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:52.301 03:04:43 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:52.301 03:04:43 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:23:52.301 03:04:43 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:52.301 03:04:43 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:52.301 03:04:43 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:52.301 03:04:43 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:52.301 03:04:43 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:52.301 03:04:43 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:23:52.301 03:04:43 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:52.301 03:04:43 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:23:52.301 03:04:43 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:23:52.301 03:04:43 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:23:52.301 03:04:43 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:23:52.301 03:04:43 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:23:52.301 03:04:43 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:23:52.301 03:04:43 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:52.301 03:04:43 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:52.301 03:04:43 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:52.301 03:04:43 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:52.301 03:04:43 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:52.301 03:04:43 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:52.301 03:04:43 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:52.301 03:04:43 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:52.301 03:04:43 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:52.301 03:04:43 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:52.301 03:04:43 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:52.301 03:04:43 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:52.301 03:04:43 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:52.301 
03:04:43 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:52.301 03:04:43 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:52.301 03:04:43 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:52.301 03:04:43 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:52.301 03:04:43 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:52.301 03:04:43 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:52.301 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:52.301 03:04:43 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:52.301 03:04:43 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:52.301 03:04:43 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.301 03:04:43 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.301 03:04:43 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:52.301 03:04:43 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:52.301 03:04:43 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:52.301 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:52.301 03:04:43 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:52.301 03:04:43 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:52.301 03:04:43 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.301 03:04:43 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.301 03:04:43 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:52.301 03:04:43 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:52.301 03:04:43 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:52.302 03:04:43 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:52.302 03:04:43 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:52.302 03:04:43 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.302 03:04:43 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:52.302 03:04:43 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.302 03:04:43 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:52.302 03:04:43 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:52.302 03:04:43 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.302 03:04:43 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:52.302 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:52.302 03:04:43 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.302 03:04:43 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:52.302 03:04:43 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.302 03:04:43 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:52.302 03:04:43 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.302 03:04:43 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:52.302 03:04:43 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:52.302 03:04:43 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.302 03:04:43 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:52.302 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:52.302 03:04:43 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.302 
03:04:43 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:52.302 03:04:43 nvmf_tcp -- nvmf/nvmf.sh@73 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:52.302 03:04:43 nvmf_tcp -- nvmf/nvmf.sh@74 -- # (( 2 > 0 )) 00:23:52.302 03:04:43 nvmf_tcp -- nvmf/nvmf.sh@75 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:52.302 03:04:43 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:52.302 03:04:43 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:52.302 03:04:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:52.302 ************************************ 00:23:52.302 START TEST nvmf_perf_adq 00:23:52.302 ************************************ 00:23:52.302 03:04:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:52.302 * Looking for test storage... 00:23:52.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:52.302 03:04:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:52.302 03:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:23:52.302 03:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:52.302 03:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:52.302 03:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:52.302 03:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:52.302 03:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:52.302 03:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:52.302 03:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:52.302 03:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:52.302 03:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:52.302 03:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:52.302 03:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:52.302 03:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:52.302 03:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:52.302 03:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:52.302 03:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:52.302 03:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:52.302 03:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:52.561 03:04:43 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:52.561 03:04:43 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:52.561 03:04:43 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:52.561 03:04:43 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.561 03:04:43 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.561 03:04:43 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.561 03:04:43 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:23:52.561 03:04:43 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.561 03:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:23:52.561 03:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:52.561 03:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:52.561 03:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:52.561 03:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:52.561 03:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:52.561 03:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:52.561 03:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:52.561 03:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:52.561 03:04:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:23:52.561 03:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:23:52.561 03:04:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:54.460 03:04:44 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:54.460 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:23:54.460 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:54.460 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:54.460 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:54.460 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:54.460 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:54.460 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:23:54.460 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:54.460 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:23:54.460 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:54.461 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
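Note: the gather_supported_nvmf_pci_devs pass above classifies NICs by PCI vendor/device ID (0x8086:0x159b is the Intel E810 port bound to the ice driver on this host, 0x15b3 entries cover Mellanox parts) and then, as the following entries show, resolves each PCI address to its kernel net device through /sys/bus/pci/devices/<pci>/net/. A minimal standalone sketch of that lookup, assuming the same 0000:0a:00.x addresses reported in this run:

    # sketch only: equivalent of the pci -> net_dev resolution done by nvmf/common.sh,
    # using the two E810 ports found above
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        # each entry under /sys/bus/pci/devices/$pci/net/ is a netdev bound to that port
        for netdev in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$netdev" ] || continue
            echo "Found net device under $pci: ${netdev##*/}"
        done
    done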
00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:54.461 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:54.461 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:54.461 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 
-- # (( 2 == 0 )) 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:23:54.461 03:04:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:23:55.028 03:04:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:23:56.403 03:04:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:01.673 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:01.673 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:01.673 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:01.673 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:01.674 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:01.674 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:01.674 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:01.674 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:01.674 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:24:01.674 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:01.674 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:01.674 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:01.674 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:01.674 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:01.674 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:01.674 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:01.674 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:01.674 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:01.674 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:01.674 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:01.674 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:01.674 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:01.674 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:01.674 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:01.674 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:01.674 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:01.674 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:01.674 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:01.674 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:01.674 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:01.674 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:01.674 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:01.674 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:01.674 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:24:01.674 00:24:01.674 --- 10.0.0.2 ping statistics --- 00:24:01.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.674 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:24:01.674 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:01.674 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:01.674 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:24:01.674 00:24:01.674 --- 10.0.0.1 ping statistics --- 00:24:01.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.674 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:24:01.674 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:01.674 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:24:01.674 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:01.674 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:01.674 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:01.674 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:01.674 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:01.674 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:01.674 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:01.674 03:04:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:01.674 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:01.674 03:04:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:01.674 03:04:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:01.674 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=406782 00:24:01.674 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:01.674 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 406782 00:24:01.674 03:04:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 406782 ']' 00:24:01.674 03:04:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:01.674 03:04:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:01.674 03:04:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
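Note: nvmf_tcp_init above splits the two E810 ports across a network namespace: cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2/24 (target side), cvl_0_1 stays in the root namespace as 10.0.0.1/24 (initiator side), and TCP port 4420 is opened in iptables; the two pings confirm the path in both directions before nvmf_tgt is launched inside the namespace. Condensed replay of that setup, using the same interface and namespace names as this run:

    # sketch only: namespace topology built by nvmf_tcp_init (names taken from the trace)
    ip netns add cvl_0_0_ns_spdk                              # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # move first E810 port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP to the target
    ping -c 1 10.0.0.2                                        # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target ns -> root ns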
00:24:01.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:01.674 03:04:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:01.674 03:04:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:01.674 [2024-05-13 03:04:52.281561] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:24:01.674 [2024-05-13 03:04:52.281631] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:01.674 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.674 [2024-05-13 03:04:52.320395] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:01.674 [2024-05-13 03:04:52.346554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:01.674 [2024-05-13 03:04:52.431316] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:01.674 [2024-05-13 03:04:52.431367] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:01.674 [2024-05-13 03:04:52.431386] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:01.674 [2024-05-13 03:04:52.431397] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:01.674 [2024-05-13 03:04:52.431407] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:01.674 [2024-05-13 03:04:52.431490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:01.674 [2024-05-13 03:04:52.431519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:01.674 [2024-05-13 03:04:52.431574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:01.674 [2024-05-13 03:04:52.431576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:01.932 03:04:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:01.932 03:04:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:24:01.932 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:01.932 03:04:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:01.932 03:04:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:01.932 03:04:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:01.932 03:04:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:24:01.932 03:04:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:24:01.932 03:04:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:24:01.932 03:04:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.932 03:04:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:01.932 03:04:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.932 03:04:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:24:01.932 03:04:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 
--enable-zerocopy-send-server -i posix 00:24:01.932 03:04:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.932 03:04:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:01.932 03:04:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.932 03:04:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:24:01.932 03:04:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.932 03:04:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:01.932 03:04:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.932 03:04:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:24:01.932 03:04:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.932 03:04:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:01.932 [2024-05-13 03:04:52.658253] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:01.932 03:04:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.932 03:04:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:01.932 03:04:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.932 03:04:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:01.932 Malloc1 00:24:01.932 03:04:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.932 03:04:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:01.932 03:04:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.932 03:04:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:01.932 03:04:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.932 03:04:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:01.932 03:04:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.932 03:04:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:01.932 03:04:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.933 03:04:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:01.933 03:04:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.933 03:04:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:01.933 [2024-05-13 03:04:52.708911] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:01.933 [2024-05-13 03:04:52.709222] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:01.933 03:04:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.933 03:04:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=406912 00:24:01.933 03:04:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:01.933 03:04:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:24:02.191 EAL: No free 2048 kB hugepages reported on node 1 00:24:04.090 03:04:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:24:04.090 03:04:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.090 03:04:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:04.090 03:04:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.090 03:04:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:24:04.090 "tick_rate": 2700000000, 00:24:04.090 "poll_groups": [ 00:24:04.090 { 00:24:04.090 "name": "nvmf_tgt_poll_group_000", 00:24:04.090 "admin_qpairs": 1, 00:24:04.090 "io_qpairs": 1, 00:24:04.090 "current_admin_qpairs": 1, 00:24:04.090 "current_io_qpairs": 1, 00:24:04.090 "pending_bdev_io": 0, 00:24:04.090 "completed_nvme_io": 19481, 00:24:04.090 "transports": [ 00:24:04.090 { 00:24:04.090 "trtype": "TCP" 00:24:04.090 } 00:24:04.090 ] 00:24:04.090 }, 00:24:04.090 { 00:24:04.090 "name": "nvmf_tgt_poll_group_001", 00:24:04.090 "admin_qpairs": 0, 00:24:04.090 "io_qpairs": 1, 00:24:04.090 "current_admin_qpairs": 0, 00:24:04.090 "current_io_qpairs": 1, 00:24:04.090 "pending_bdev_io": 0, 00:24:04.090 "completed_nvme_io": 16897, 00:24:04.090 "transports": [ 00:24:04.090 { 00:24:04.090 "trtype": "TCP" 00:24:04.090 } 00:24:04.090 ] 00:24:04.090 }, 00:24:04.090 { 00:24:04.090 "name": "nvmf_tgt_poll_group_002", 00:24:04.090 "admin_qpairs": 0, 00:24:04.090 "io_qpairs": 1, 00:24:04.090 "current_admin_qpairs": 0, 00:24:04.090 "current_io_qpairs": 1, 00:24:04.090 "pending_bdev_io": 0, 00:24:04.090 "completed_nvme_io": 21479, 00:24:04.090 "transports": [ 00:24:04.090 { 00:24:04.090 "trtype": "TCP" 00:24:04.090 } 00:24:04.090 ] 00:24:04.090 }, 00:24:04.090 { 00:24:04.090 "name": "nvmf_tgt_poll_group_003", 00:24:04.090 "admin_qpairs": 0, 00:24:04.090 "io_qpairs": 1, 00:24:04.090 "current_admin_qpairs": 0, 00:24:04.090 "current_io_qpairs": 1, 00:24:04.090 "pending_bdev_io": 0, 00:24:04.090 "completed_nvme_io": 21236, 00:24:04.090 "transports": [ 00:24:04.090 { 00:24:04.090 "trtype": "TCP" 00:24:04.090 } 00:24:04.090 ] 00:24:04.090 } 00:24:04.090 ] 00:24:04.090 }' 00:24:04.090 03:04:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:24:04.090 03:04:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:24:04.090 03:04:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:24:04.090 03:04:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:24:04.090 03:04:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 406912 00:24:12.238 Initializing NVMe Controllers 00:24:12.239 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:12.239 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:12.239 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:12.239 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:12.239 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:12.239 Initialization 
complete. Launching workers. 00:24:12.239 ======================================================== 00:24:12.239 Latency(us) 00:24:12.239 Device Information : IOPS MiB/s Average min max 00:24:12.239 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11125.40 43.46 5752.98 2086.86 9195.44 00:24:12.239 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 8892.80 34.74 7198.33 3220.92 11214.39 00:24:12.239 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11265.00 44.00 5681.63 3278.59 8949.20 00:24:12.239 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10243.90 40.02 6248.01 3400.99 8747.66 00:24:12.239 ======================================================== 00:24:12.239 Total : 41527.10 162.22 6165.25 2086.86 11214.39 00:24:12.239 00:24:12.239 03:05:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:24:12.239 03:05:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:12.239 03:05:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:24:12.239 03:05:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:12.239 03:05:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:24:12.239 03:05:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:12.239 03:05:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:12.239 rmmod nvme_tcp 00:24:12.239 rmmod nvme_fabrics 00:24:12.239 rmmod nvme_keyring 00:24:12.239 03:05:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:12.239 03:05:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:24:12.239 03:05:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:24:12.239 03:05:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 406782 ']' 00:24:12.239 03:05:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 406782 00:24:12.239 03:05:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 406782 ']' 00:24:12.239 03:05:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 406782 00:24:12.239 03:05:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:24:12.239 03:05:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:12.239 03:05:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 406782 00:24:12.239 03:05:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:12.239 03:05:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:12.239 03:05:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 406782' 00:24:12.239 killing process with pid 406782 00:24:12.239 03:05:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 406782 00:24:12.239 [2024-05-13 03:05:02.921056] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:12.239 03:05:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 406782 00:24:12.498 03:05:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:12.498 03:05:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:12.498 03:05:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
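Note: the run above is the baseline pass of perf_adq.sh (placement-id 0, --sock-priority 0, i.e. ADQ not yet active). Its only correctness check is that each of the four nvmf_tgt poll groups owns exactly one I/O queue pair while spdk_nvme_perf drives four cores (-c 0xF0), and the ~41.5k IOPS aggregate serves as the reference number for the ADQ pass later. Sketch of that check; the jq filter is taken from the trace, while calling scripts/rpc.py directly is an assumption - the test itself goes through its rpc_cmd wrapper:

    # sketch only: baseline check - every poll group should carry exactly one io_qpair
    count=$(scripts/rpc.py nvmf_get_stats \
            | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
            | wc -l)
    if [ "$count" -ne 4 ]; then
        echo "expected one io_qpair per poll group, got $count matching groups" >&2
    fi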
00:24:12.498 03:05:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:12.498 03:05:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:12.498 03:05:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:12.498 03:05:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:12.499 03:05:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.404 03:05:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:14.404 03:05:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:24:14.404 03:05:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:24:14.972 03:05:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:24:16.875 03:05:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga 
mlx 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:22.149 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:22.149 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 
-- # [[ tcp == rdma ]] 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:22.149 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:22.149 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:22.149 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 
addr flush cvl_0_1 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:22.150 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:22.150 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:24:22.150 00:24:22.150 --- 10.0.0.2 ping statistics --- 00:24:22.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:22.150 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:22.150 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:22.150 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:24:22.150 00:24:22.150 --- 10.0.0.1 ping statistics --- 00:24:22.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:22.150 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:24:22.150 net.core.busy_poll = 1 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:24:22.150 net.core.busy_read = 1 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip 
netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=409402 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 409402 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 409402 ']' 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:22.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:22.150 [2024-05-13 03:05:12.578353] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:24:22.150 [2024-05-13 03:05:12.578425] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:22.150 EAL: No free 2048 kB hugepages reported on node 1 00:24:22.150 [2024-05-13 03:05:12.615941] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:22.150 [2024-05-13 03:05:12.641989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:22.150 [2024-05-13 03:05:12.728315] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:22.150 [2024-05-13 03:05:12.728367] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
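Note: this is the ADQ-specific host configuration for the second pass: hardware TC offload and busy polling are enabled on the E810 port, an mqprio root qdisc splits it into two traffic classes, and a flower filter steers NVMe/TCP traffic for 10.0.0.2:4420 into TC 1 in hardware (skip_sw), after which set_xps_rxqs aligns XPS with those queues. Condensed replay with the same values as the trace; in the original run the ethtool and tc commands are executed inside the cvl_0_0_ns_spdk namespace via ip netns exec, while the sysctl calls run in the root namespace:

    # sketch only: ADQ driver/network setup traced above
    ethtool --offload cvl_0_0 hw-tc-offload on
    ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev cvl_0_0 ingress
    tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1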
00:24:22.150 [2024-05-13 03:05:12.728389] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:22.150 [2024-05-13 03:05:12.728399] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:22.150 [2024-05-13 03:05:12.728408] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:22.150 [2024-05-13 03:05:12.728504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:22.150 [2024-05-13 03:05:12.728569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:22.150 [2024-05-13 03:05:12.728635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:22.150 [2024-05-13 03:05:12.728637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.150 03:05:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:22.409 03:05:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.409 03:05:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:24:22.409 03:05:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.409 03:05:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:22.409 [2024-05-13 03:05:12.975623] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:22.409 03:05:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.409 03:05:12 nvmf_tcp.nvmf_perf_adq 
-- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:22.409 03:05:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.409 03:05:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:22.409 Malloc1 00:24:22.409 03:05:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.409 03:05:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:22.409 03:05:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.409 03:05:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:22.409 03:05:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.409 03:05:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:22.409 03:05:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.409 03:05:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:22.409 03:05:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.409 03:05:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:22.409 03:05:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.409 03:05:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:22.409 [2024-05-13 03:05:13.028568] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:22.409 [2024-05-13 03:05:13.028915] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:22.409 03:05:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.409 03:05:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=409549 00:24:22.409 03:05:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:24:22.409 03:05:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:22.409 EAL: No free 2048 kB hugepages reported on node 1 00:24:24.311 03:05:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:24:24.311 03:05:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.311 03:05:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:24.311 03:05:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.311 03:05:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:24:24.311 "tick_rate": 2700000000, 00:24:24.311 "poll_groups": [ 00:24:24.311 { 00:24:24.311 "name": "nvmf_tgt_poll_group_000", 00:24:24.311 "admin_qpairs": 1, 00:24:24.311 "io_qpairs": 1, 00:24:24.311 "current_admin_qpairs": 1, 00:24:24.311 "current_io_qpairs": 1, 00:24:24.311 "pending_bdev_io": 0, 00:24:24.311 "completed_nvme_io": 22412, 00:24:24.311 "transports": [ 00:24:24.311 { 00:24:24.311 "trtype": "TCP" 00:24:24.311 } 00:24:24.311 ] 00:24:24.311 }, 00:24:24.311 { 00:24:24.311 "name": 
"nvmf_tgt_poll_group_001", 00:24:24.311 "admin_qpairs": 0, 00:24:24.311 "io_qpairs": 3, 00:24:24.311 "current_admin_qpairs": 0, 00:24:24.311 "current_io_qpairs": 3, 00:24:24.311 "pending_bdev_io": 0, 00:24:24.311 "completed_nvme_io": 27695, 00:24:24.312 "transports": [ 00:24:24.312 { 00:24:24.312 "trtype": "TCP" 00:24:24.312 } 00:24:24.312 ] 00:24:24.312 }, 00:24:24.312 { 00:24:24.312 "name": "nvmf_tgt_poll_group_002", 00:24:24.312 "admin_qpairs": 0, 00:24:24.312 "io_qpairs": 0, 00:24:24.312 "current_admin_qpairs": 0, 00:24:24.312 "current_io_qpairs": 0, 00:24:24.312 "pending_bdev_io": 0, 00:24:24.312 "completed_nvme_io": 0, 00:24:24.312 "transports": [ 00:24:24.312 { 00:24:24.312 "trtype": "TCP" 00:24:24.312 } 00:24:24.312 ] 00:24:24.312 }, 00:24:24.312 { 00:24:24.312 "name": "nvmf_tgt_poll_group_003", 00:24:24.312 "admin_qpairs": 0, 00:24:24.312 "io_qpairs": 0, 00:24:24.312 "current_admin_qpairs": 0, 00:24:24.312 "current_io_qpairs": 0, 00:24:24.312 "pending_bdev_io": 0, 00:24:24.312 "completed_nvme_io": 0, 00:24:24.312 "transports": [ 00:24:24.312 { 00:24:24.312 "trtype": "TCP" 00:24:24.312 } 00:24:24.312 ] 00:24:24.312 } 00:24:24.312 ] 00:24:24.312 }' 00:24:24.312 03:05:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:24:24.312 03:05:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:24:24.312 03:05:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:24:24.312 03:05:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:24:24.312 03:05:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 409549 00:24:34.283 Initializing NVMe Controllers 00:24:34.283 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:34.283 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:34.283 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:34.283 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:34.283 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:34.283 Initialization complete. Launching workers. 
00:24:34.283 ========================================================
00:24:34.283 Latency(us)
00:24:34.283 Device Information : IOPS MiB/s Average min max
00:24:34.283 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5071.80 19.81 12646.42 2048.80 58794.20
00:24:34.283 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 12072.30 47.16 5302.51 1860.00 47458.58
00:24:34.283 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4628.10 18.08 13880.59 1955.82 59533.00
00:24:34.284 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5083.20 19.86 12592.47 2183.49 57279.39
00:24:34.284 ========================================================
00:24:34.284 Total : 26855.40 104.90 9547.59 1860.00 59533.00
00:24:34.284
00:24:34.284 03:05:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini
00:24:34.284 03:05:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup
00:24:34.284 03:05:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync
00:24:34.284 03:05:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:24:34.284 03:05:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e
00:24:34.284 03:05:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20}
00:24:34.284 03:05:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:24:34.284 rmmod nvme_tcp
00:24:34.284 rmmod nvme_fabrics
00:24:34.284 rmmod nvme_keyring
00:24:34.284 03:05:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:24:34.284 03:05:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e
00:24:34.284 03:05:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0
00:24:34.284 03:05:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 409402 ']'
00:24:34.284 03:05:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 409402
00:24:34.284 03:05:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 409402 ']'
00:24:34.284 03:05:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 409402
00:24:34.284 03:05:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname
00:24:34.284 03:05:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:24:34.284 03:05:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 409402
00:24:34.284 03:05:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:24:34.284 03:05:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:24:34.284 03:05:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 409402'
00:24:34.284 killing process with pid 409402
00:24:34.284 03:05:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 409402
00:24:34.284 [2024-05-13 03:05:23.319778] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:24:34.284 03:05:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 409402
00:24:34.284 03:05:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:24:34.284 03:05:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:24:34.284 03:05:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:24:34.284 03:05:23 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:34.284 03:05:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:34.284 03:05:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:34.284 03:05:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:34.284 03:05:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:36.214 03:05:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:36.214 03:05:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:24:36.214 00:24:36.214 real 0m43.558s 00:24:36.214 user 2m35.219s 00:24:36.214 sys 0m10.844s 00:24:36.214 03:05:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:36.214 03:05:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:36.214 ************************************ 00:24:36.214 END TEST nvmf_perf_adq 00:24:36.214 ************************************ 00:24:36.214 03:05:26 nvmf_tcp -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:36.214 03:05:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:36.214 03:05:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:36.214 03:05:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:36.214 ************************************ 00:24:36.214 START TEST nvmf_shutdown 00:24:36.214 ************************************ 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:36.214 * Looking for test storage... 
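For reference, the target-side sequence that adq_configure_nvmf_target ran at the start of the perf_adq test (perf_adq.sh@42-49 above) can be replayed as direct scripts/rpc.py calls; the commands and arguments below are copied from the trace, and only the rpc.py invocation itself is an assumption of this sketch:

impl=$(./scripts/rpc.py sock_get_default_impl | jq -r .impl_name)    # resolves to "posix" in this run
./scripts/rpc.py sock_impl_set_options -i "$impl" --enable-placement-id 1 --enable-zerocopy-send-server
./scripts/rpc.py framework_start_init
./scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1                # 64 MB malloc bdev, 512-byte blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The placement-id and zero-copy-send socket options are the ADQ-specific part of this bring-up; the shutdown tests that follow start their target without them.
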
00:24:36.214 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:36.214 ************************************ 00:24:36.214 START TEST nvmf_shutdown_tc1 00:24:36.214 ************************************ 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc1 00:24:36.214 03:05:26 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:36.214 03:05:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:38.118 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:38.118 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:38.118 03:05:28 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.118 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:38.119 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:38.119 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:38.119 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:38.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:24:38.119 00:24:38.119 --- 10.0.0.2 ping statistics --- 00:24:38.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.119 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:38.119 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:38.119 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:24:38.119 00:24:38.119 --- 10.0.0.1 ping statistics --- 00:24:38.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.119 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=412835 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 412835 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 412835 ']' 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:38.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:38.119 03:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:38.119 [2024-05-13 03:05:28.861010] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 
00:24:38.119 [2024-05-13 03:05:28.861083] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:38.119 EAL: No free 2048 kB hugepages reported on node 1 00:24:38.119 [2024-05-13 03:05:28.897942] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:38.378 [2024-05-13 03:05:28.925736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:38.378 [2024-05-13 03:05:29.010859] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:38.378 [2024-05-13 03:05:29.010909] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:38.378 [2024-05-13 03:05:29.010922] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:38.378 [2024-05-13 03:05:29.010933] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:38.378 [2024-05-13 03:05:29.010943] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:38.378 [2024-05-13 03:05:29.011006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:38.378 [2024-05-13 03:05:29.011066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:38.378 [2024-05-13 03:05:29.011140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:38.378 [2024-05-13 03:05:29.011142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:38.378 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:38.378 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:24:38.378 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:38.378 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:38.378 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:38.378 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:38.378 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:38.378 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.378 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:38.378 [2024-05-13 03:05:29.164561] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:38.378 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.378 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:38.378 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:38.378 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:38.378 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:38.378 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:38.637 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:38.637 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:38.637 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:38.637 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:38.637 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:38.637 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:38.637 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:38.637 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:38.637 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:38.637 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:38.637 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:38.637 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:38.637 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:38.637 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:38.637 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:38.637 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:38.637 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:38.637 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:38.637 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:38.637 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:38.637 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:38.637 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.637 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:38.637 Malloc1 00:24:38.637 [2024-05-13 03:05:29.253914] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:38.637 [2024-05-13 03:05:29.254210] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:38.637 Malloc2 00:24:38.637 Malloc3 00:24:38.637 Malloc4 00:24:38.637 Malloc5 00:24:38.896 Malloc6 00:24:38.896 Malloc7 00:24:38.896 Malloc8 00:24:38.896 Malloc9 00:24:38.896 Malloc10 00:24:39.154 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.154 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:39.154 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:24:39.154 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:39.154 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=412933 00:24:39.154 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 412933 /var/tmp/bdevperf.sock 00:24:39.154 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 412933 ']' 00:24:39.154 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:39.154 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:24:39.154 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:39.154 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:39.155 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:39.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:39.155 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:24:39.155 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:39.155 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:24:39.155 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:39.155 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:39.155 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:39.155 { 00:24:39.155 "params": { 00:24:39.155 "name": "Nvme$subsystem", 00:24:39.155 "trtype": "$TEST_TRANSPORT", 00:24:39.155 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:39.155 "adrfam": "ipv4", 00:24:39.155 "trsvcid": "$NVMF_PORT", 00:24:39.155 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:39.155 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:39.155 "hdgst": ${hdgst:-false}, 00:24:39.155 "ddgst": ${ddgst:-false} 00:24:39.155 }, 00:24:39.155 "method": "bdev_nvme_attach_controller" 00:24:39.155 } 00:24:39.155 EOF 00:24:39.155 )") 00:24:39.155 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:39.155 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:39.155 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:39.155 { 00:24:39.155 "params": { 00:24:39.155 "name": "Nvme$subsystem", 00:24:39.155 "trtype": "$TEST_TRANSPORT", 00:24:39.155 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:39.155 "adrfam": "ipv4", 00:24:39.155 "trsvcid": "$NVMF_PORT", 00:24:39.155 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:39.155 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:39.155 "hdgst": ${hdgst:-false}, 00:24:39.155 "ddgst": ${ddgst:-false} 00:24:39.155 }, 00:24:39.155 "method": "bdev_nvme_attach_controller" 00:24:39.155 } 00:24:39.155 EOF 00:24:39.155 )") 
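Each heredoc fragment in this stretch of the trace contributes one bdev_nvme_attach_controller entry (Nvme1 through Nvme10) to the JSON that gen_nvmf_target_json assembles for the bdev_svc/bdevperf runs. The same attachment can also be made against an already-running application over its RPC socket; a hedged sketch for the first subsystem, with the values taken from the generated config but the rpc.py flag names assumed from the standard SPDK RPC client rather than from this trace:

# attach nqn.2016-06.io.spdk:cnode1 as bdev "Nvme1" on the bdevperf RPC socket (sketch)
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b Nvme1 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
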
00:24:39.155 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:39.155 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:39.155 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:39.155 { 00:24:39.155 "params": { 00:24:39.155 "name": "Nvme$subsystem", 00:24:39.155 "trtype": "$TEST_TRANSPORT", 00:24:39.155 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:39.155 "adrfam": "ipv4", 00:24:39.155 "trsvcid": "$NVMF_PORT", 00:24:39.155 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:39.155 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:39.155 "hdgst": ${hdgst:-false}, 00:24:39.155 "ddgst": ${ddgst:-false} 00:24:39.155 }, 00:24:39.155 "method": "bdev_nvme_attach_controller" 00:24:39.155 } 00:24:39.155 EOF 00:24:39.155 )") 00:24:39.155 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:39.155 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:39.155 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:39.155 { 00:24:39.155 "params": { 00:24:39.155 "name": "Nvme$subsystem", 00:24:39.155 "trtype": "$TEST_TRANSPORT", 00:24:39.155 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:39.155 "adrfam": "ipv4", 00:24:39.155 "trsvcid": "$NVMF_PORT", 00:24:39.155 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:39.155 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:39.155 "hdgst": ${hdgst:-false}, 00:24:39.155 "ddgst": ${ddgst:-false} 00:24:39.155 }, 00:24:39.155 "method": "bdev_nvme_attach_controller" 00:24:39.155 } 00:24:39.155 EOF 00:24:39.155 )") 00:24:39.155 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:39.155 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:39.155 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:39.155 { 00:24:39.155 "params": { 00:24:39.155 "name": "Nvme$subsystem", 00:24:39.155 "trtype": "$TEST_TRANSPORT", 00:24:39.155 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:39.155 "adrfam": "ipv4", 00:24:39.155 "trsvcid": "$NVMF_PORT", 00:24:39.155 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:39.155 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:39.155 "hdgst": ${hdgst:-false}, 00:24:39.155 "ddgst": ${ddgst:-false} 00:24:39.155 }, 00:24:39.155 "method": "bdev_nvme_attach_controller" 00:24:39.155 } 00:24:39.155 EOF 00:24:39.155 )") 00:24:39.155 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:39.155 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:39.155 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:39.155 { 00:24:39.155 "params": { 00:24:39.155 "name": "Nvme$subsystem", 00:24:39.155 "trtype": "$TEST_TRANSPORT", 00:24:39.155 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:39.155 "adrfam": "ipv4", 00:24:39.155 "trsvcid": "$NVMF_PORT", 00:24:39.155 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:39.155 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:39.155 "hdgst": ${hdgst:-false}, 00:24:39.155 "ddgst": ${ddgst:-false} 00:24:39.155 }, 00:24:39.155 "method": "bdev_nvme_attach_controller" 00:24:39.155 } 00:24:39.155 EOF 00:24:39.155 )") 00:24:39.155 03:05:29 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:39.155 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:39.155 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:39.155 { 00:24:39.155 "params": { 00:24:39.155 "name": "Nvme$subsystem", 00:24:39.155 "trtype": "$TEST_TRANSPORT", 00:24:39.155 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:39.155 "adrfam": "ipv4", 00:24:39.155 "trsvcid": "$NVMF_PORT", 00:24:39.155 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:39.155 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:39.155 "hdgst": ${hdgst:-false}, 00:24:39.155 "ddgst": ${ddgst:-false} 00:24:39.155 }, 00:24:39.155 "method": "bdev_nvme_attach_controller" 00:24:39.155 } 00:24:39.155 EOF 00:24:39.155 )") 00:24:39.155 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:39.155 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:39.155 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:39.155 { 00:24:39.155 "params": { 00:24:39.155 "name": "Nvme$subsystem", 00:24:39.155 "trtype": "$TEST_TRANSPORT", 00:24:39.155 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:39.155 "adrfam": "ipv4", 00:24:39.155 "trsvcid": "$NVMF_PORT", 00:24:39.155 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:39.155 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:39.155 "hdgst": ${hdgst:-false}, 00:24:39.155 "ddgst": ${ddgst:-false} 00:24:39.155 }, 00:24:39.155 "method": "bdev_nvme_attach_controller" 00:24:39.155 } 00:24:39.155 EOF 00:24:39.155 )") 00:24:39.155 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:39.155 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:39.155 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:39.155 { 00:24:39.155 "params": { 00:24:39.155 "name": "Nvme$subsystem", 00:24:39.155 "trtype": "$TEST_TRANSPORT", 00:24:39.155 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:39.155 "adrfam": "ipv4", 00:24:39.155 "trsvcid": "$NVMF_PORT", 00:24:39.155 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:39.155 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:39.155 "hdgst": ${hdgst:-false}, 00:24:39.155 "ddgst": ${ddgst:-false} 00:24:39.155 }, 00:24:39.155 "method": "bdev_nvme_attach_controller" 00:24:39.155 } 00:24:39.155 EOF 00:24:39.155 )") 00:24:39.155 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:39.155 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:39.155 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:39.155 { 00:24:39.155 "params": { 00:24:39.155 "name": "Nvme$subsystem", 00:24:39.155 "trtype": "$TEST_TRANSPORT", 00:24:39.155 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:39.155 "adrfam": "ipv4", 00:24:39.155 "trsvcid": "$NVMF_PORT", 00:24:39.155 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:39.155 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:39.155 "hdgst": ${hdgst:-false}, 00:24:39.155 "ddgst": ${ddgst:-false} 00:24:39.155 }, 00:24:39.155 "method": "bdev_nvme_attach_controller" 00:24:39.155 } 00:24:39.155 EOF 00:24:39.155 )") 00:24:39.155 03:05:29 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:39.155 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:24:39.155 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:24:39.156 03:05:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:39.156 "params": { 00:24:39.156 "name": "Nvme1", 00:24:39.156 "trtype": "tcp", 00:24:39.156 "traddr": "10.0.0.2", 00:24:39.156 "adrfam": "ipv4", 00:24:39.156 "trsvcid": "4420", 00:24:39.156 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:39.156 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:39.156 "hdgst": false, 00:24:39.156 "ddgst": false 00:24:39.156 }, 00:24:39.156 "method": "bdev_nvme_attach_controller" 00:24:39.156 },{ 00:24:39.156 "params": { 00:24:39.156 "name": "Nvme2", 00:24:39.156 "trtype": "tcp", 00:24:39.156 "traddr": "10.0.0.2", 00:24:39.156 "adrfam": "ipv4", 00:24:39.156 "trsvcid": "4420", 00:24:39.156 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:39.156 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:39.156 "hdgst": false, 00:24:39.156 "ddgst": false 00:24:39.156 }, 00:24:39.156 "method": "bdev_nvme_attach_controller" 00:24:39.156 },{ 00:24:39.156 "params": { 00:24:39.156 "name": "Nvme3", 00:24:39.156 "trtype": "tcp", 00:24:39.156 "traddr": "10.0.0.2", 00:24:39.156 "adrfam": "ipv4", 00:24:39.156 "trsvcid": "4420", 00:24:39.156 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:39.156 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:39.156 "hdgst": false, 00:24:39.156 "ddgst": false 00:24:39.156 }, 00:24:39.156 "method": "bdev_nvme_attach_controller" 00:24:39.156 },{ 00:24:39.156 "params": { 00:24:39.156 "name": "Nvme4", 00:24:39.156 "trtype": "tcp", 00:24:39.156 "traddr": "10.0.0.2", 00:24:39.156 "adrfam": "ipv4", 00:24:39.156 "trsvcid": "4420", 00:24:39.156 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:39.156 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:39.156 "hdgst": false, 00:24:39.156 "ddgst": false 00:24:39.156 }, 00:24:39.156 "method": "bdev_nvme_attach_controller" 00:24:39.156 },{ 00:24:39.156 "params": { 00:24:39.156 "name": "Nvme5", 00:24:39.156 "trtype": "tcp", 00:24:39.156 "traddr": "10.0.0.2", 00:24:39.156 "adrfam": "ipv4", 00:24:39.156 "trsvcid": "4420", 00:24:39.156 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:39.156 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:39.156 "hdgst": false, 00:24:39.156 "ddgst": false 00:24:39.156 }, 00:24:39.156 "method": "bdev_nvme_attach_controller" 00:24:39.156 },{ 00:24:39.156 "params": { 00:24:39.156 "name": "Nvme6", 00:24:39.156 "trtype": "tcp", 00:24:39.156 "traddr": "10.0.0.2", 00:24:39.156 "adrfam": "ipv4", 00:24:39.156 "trsvcid": "4420", 00:24:39.156 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:39.156 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:39.156 "hdgst": false, 00:24:39.156 "ddgst": false 00:24:39.156 }, 00:24:39.156 "method": "bdev_nvme_attach_controller" 00:24:39.156 },{ 00:24:39.156 "params": { 00:24:39.156 "name": "Nvme7", 00:24:39.156 "trtype": "tcp", 00:24:39.156 "traddr": "10.0.0.2", 00:24:39.156 "adrfam": "ipv4", 00:24:39.156 "trsvcid": "4420", 00:24:39.156 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:39.156 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:39.156 "hdgst": false, 00:24:39.156 "ddgst": false 00:24:39.156 }, 00:24:39.156 "method": "bdev_nvme_attach_controller" 00:24:39.156 },{ 00:24:39.156 "params": { 00:24:39.156 "name": "Nvme8", 00:24:39.156 "trtype": "tcp", 00:24:39.156 "traddr": "10.0.0.2", 00:24:39.156 "adrfam": "ipv4", 
00:24:39.156 "trsvcid": "4420", 00:24:39.156 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:39.156 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:39.156 "hdgst": false, 00:24:39.156 "ddgst": false 00:24:39.156 }, 00:24:39.156 "method": "bdev_nvme_attach_controller" 00:24:39.156 },{ 00:24:39.156 "params": { 00:24:39.156 "name": "Nvme9", 00:24:39.156 "trtype": "tcp", 00:24:39.156 "traddr": "10.0.0.2", 00:24:39.156 "adrfam": "ipv4", 00:24:39.156 "trsvcid": "4420", 00:24:39.156 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:39.156 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:39.156 "hdgst": false, 00:24:39.156 "ddgst": false 00:24:39.156 }, 00:24:39.156 "method": "bdev_nvme_attach_controller" 00:24:39.156 },{ 00:24:39.156 "params": { 00:24:39.156 "name": "Nvme10", 00:24:39.156 "trtype": "tcp", 00:24:39.156 "traddr": "10.0.0.2", 00:24:39.156 "adrfam": "ipv4", 00:24:39.156 "trsvcid": "4420", 00:24:39.156 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:39.156 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:39.156 "hdgst": false, 00:24:39.156 "ddgst": false 00:24:39.156 }, 00:24:39.156 "method": "bdev_nvme_attach_controller" 00:24:39.156 }' 00:24:39.156 [2024-05-13 03:05:29.769955] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:24:39.156 [2024-05-13 03:05:29.770046] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:24:39.156 EAL: No free 2048 kB hugepages reported on node 1 00:24:39.156 [2024-05-13 03:05:29.807171] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:39.156 [2024-05-13 03:05:29.836321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.156 [2024-05-13 03:05:29.923375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:41.054 03:05:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:41.054 03:05:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:24:41.054 03:05:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:41.054 03:05:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.054 03:05:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:41.054 03:05:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.054 03:05:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 412933 00:24:41.054 03:05:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:24:41.054 03:05:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:24:41.987 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 412933 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:24:41.987 03:05:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 412835 00:24:41.987 03:05:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json 
/dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:24:41.987 03:05:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:41.987 03:05:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:24:41.987 03:05:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:24:41.987 03:05:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:41.987 03:05:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:41.987 { 00:24:41.987 "params": { 00:24:41.987 "name": "Nvme$subsystem", 00:24:41.987 "trtype": "$TEST_TRANSPORT", 00:24:41.987 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:41.987 "adrfam": "ipv4", 00:24:41.987 "trsvcid": "$NVMF_PORT", 00:24:41.987 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:41.987 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:41.987 "hdgst": ${hdgst:-false}, 00:24:41.987 "ddgst": ${ddgst:-false} 00:24:41.987 }, 00:24:41.987 "method": "bdev_nvme_attach_controller" 00:24:41.987 } 00:24:41.987 EOF 00:24:41.987 )") 00:24:41.987 03:05:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:41.987 03:05:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:41.987 03:05:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:41.987 { 00:24:41.987 "params": { 00:24:41.987 "name": "Nvme$subsystem", 00:24:41.987 "trtype": "$TEST_TRANSPORT", 00:24:41.987 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:41.987 "adrfam": "ipv4", 00:24:41.987 "trsvcid": "$NVMF_PORT", 00:24:41.987 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:41.987 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:41.987 "hdgst": ${hdgst:-false}, 00:24:41.987 "ddgst": ${ddgst:-false} 00:24:41.987 }, 00:24:41.987 "method": "bdev_nvme_attach_controller" 00:24:41.987 } 00:24:41.987 EOF 00:24:41.987 )") 00:24:41.987 03:05:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:41.987 03:05:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:41.987 03:05:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:41.987 { 00:24:41.987 "params": { 00:24:41.987 "name": "Nvme$subsystem", 00:24:41.987 "trtype": "$TEST_TRANSPORT", 00:24:41.987 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:41.987 "adrfam": "ipv4", 00:24:41.987 "trsvcid": "$NVMF_PORT", 00:24:41.987 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:41.987 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:41.987 "hdgst": ${hdgst:-false}, 00:24:41.987 "ddgst": ${ddgst:-false} 00:24:41.987 }, 00:24:41.987 "method": "bdev_nvme_attach_controller" 00:24:41.987 } 00:24:41.987 EOF 00:24:41.987 )") 00:24:41.987 03:05:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:41.987 03:05:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:41.987 03:05:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:41.987 { 00:24:41.987 "params": { 00:24:41.987 "name": "Nvme$subsystem", 00:24:41.987 "trtype": "$TEST_TRANSPORT", 00:24:41.987 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:41.987 "adrfam": "ipv4", 00:24:41.987 "trsvcid": "$NVMF_PORT", 00:24:41.987 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:24:41.987 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:41.987 "hdgst": ${hdgst:-false}, 00:24:41.987 "ddgst": ${ddgst:-false} 00:24:41.987 }, 00:24:41.987 "method": "bdev_nvme_attach_controller" 00:24:41.987 } 00:24:41.987 EOF 00:24:41.987 )") 00:24:41.987 03:05:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:41.987 03:05:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:41.987 03:05:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:41.987 { 00:24:41.987 "params": { 00:24:41.987 "name": "Nvme$subsystem", 00:24:41.987 "trtype": "$TEST_TRANSPORT", 00:24:41.987 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:41.987 "adrfam": "ipv4", 00:24:41.987 "trsvcid": "$NVMF_PORT", 00:24:41.987 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:41.987 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:41.987 "hdgst": ${hdgst:-false}, 00:24:41.987 "ddgst": ${ddgst:-false} 00:24:41.987 }, 00:24:41.987 "method": "bdev_nvme_attach_controller" 00:24:41.987 } 00:24:41.987 EOF 00:24:41.987 )") 00:24:41.987 03:05:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:41.987 03:05:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:41.987 03:05:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:41.987 { 00:24:41.987 "params": { 00:24:41.987 "name": "Nvme$subsystem", 00:24:41.987 "trtype": "$TEST_TRANSPORT", 00:24:41.987 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:41.987 "adrfam": "ipv4", 00:24:41.987 "trsvcid": "$NVMF_PORT", 00:24:41.987 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:41.987 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:41.987 "hdgst": ${hdgst:-false}, 00:24:41.987 "ddgst": ${ddgst:-false} 00:24:41.987 }, 00:24:41.987 "method": "bdev_nvme_attach_controller" 00:24:41.987 } 00:24:41.987 EOF 00:24:41.987 )") 00:24:41.987 03:05:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:41.987 03:05:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:41.987 03:05:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:41.987 { 00:24:41.987 "params": { 00:24:41.987 "name": "Nvme$subsystem", 00:24:41.987 "trtype": "$TEST_TRANSPORT", 00:24:41.987 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:41.987 "adrfam": "ipv4", 00:24:41.987 "trsvcid": "$NVMF_PORT", 00:24:41.988 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:41.988 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:41.988 "hdgst": ${hdgst:-false}, 00:24:41.988 "ddgst": ${ddgst:-false} 00:24:41.988 }, 00:24:41.988 "method": "bdev_nvme_attach_controller" 00:24:41.988 } 00:24:41.988 EOF 00:24:41.988 )") 00:24:41.988 03:05:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:41.988 03:05:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:41.988 03:05:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:41.988 { 00:24:41.988 "params": { 00:24:41.988 "name": "Nvme$subsystem", 00:24:41.988 "trtype": "$TEST_TRANSPORT", 00:24:41.988 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:41.988 "adrfam": "ipv4", 00:24:41.988 "trsvcid": "$NVMF_PORT", 00:24:41.988 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:24:41.988 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:41.988 "hdgst": ${hdgst:-false}, 00:24:41.988 "ddgst": ${ddgst:-false} 00:24:41.988 }, 00:24:41.988 "method": "bdev_nvme_attach_controller" 00:24:41.988 } 00:24:41.988 EOF 00:24:41.988 )") 00:24:41.988 03:05:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:41.988 03:05:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:41.988 03:05:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:41.988 { 00:24:41.988 "params": { 00:24:41.988 "name": "Nvme$subsystem", 00:24:41.988 "trtype": "$TEST_TRANSPORT", 00:24:41.988 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:41.988 "adrfam": "ipv4", 00:24:41.988 "trsvcid": "$NVMF_PORT", 00:24:41.988 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:41.988 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:41.988 "hdgst": ${hdgst:-false}, 00:24:41.988 "ddgst": ${ddgst:-false} 00:24:41.988 }, 00:24:41.988 "method": "bdev_nvme_attach_controller" 00:24:41.988 } 00:24:41.988 EOF 00:24:41.988 )") 00:24:41.988 03:05:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:41.988 03:05:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:41.988 03:05:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:41.988 { 00:24:41.988 "params": { 00:24:41.988 "name": "Nvme$subsystem", 00:24:41.988 "trtype": "$TEST_TRANSPORT", 00:24:41.988 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:41.988 "adrfam": "ipv4", 00:24:41.988 "trsvcid": "$NVMF_PORT", 00:24:41.988 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:41.988 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:41.988 "hdgst": ${hdgst:-false}, 00:24:41.988 "ddgst": ${ddgst:-false} 00:24:41.988 }, 00:24:41.988 "method": "bdev_nvme_attach_controller" 00:24:41.988 } 00:24:41.988 EOF 00:24:41.988 )") 00:24:41.988 03:05:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:41.988 03:05:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:24:41.988 03:05:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:24:41.988 03:05:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:41.988 "params": { 00:24:41.988 "name": "Nvme1", 00:24:41.988 "trtype": "tcp", 00:24:41.988 "traddr": "10.0.0.2", 00:24:41.988 "adrfam": "ipv4", 00:24:41.988 "trsvcid": "4420", 00:24:41.988 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:41.988 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:41.988 "hdgst": false, 00:24:41.988 "ddgst": false 00:24:41.988 }, 00:24:41.988 "method": "bdev_nvme_attach_controller" 00:24:41.988 },{ 00:24:41.988 "params": { 00:24:41.988 "name": "Nvme2", 00:24:41.988 "trtype": "tcp", 00:24:41.988 "traddr": "10.0.0.2", 00:24:41.988 "adrfam": "ipv4", 00:24:41.988 "trsvcid": "4420", 00:24:41.988 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:41.988 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:41.988 "hdgst": false, 00:24:41.988 "ddgst": false 00:24:41.988 }, 00:24:41.988 "method": "bdev_nvme_attach_controller" 00:24:41.988 },{ 00:24:41.988 "params": { 00:24:41.988 "name": "Nvme3", 00:24:41.988 "trtype": "tcp", 00:24:41.988 "traddr": "10.0.0.2", 00:24:41.988 "adrfam": "ipv4", 00:24:41.988 "trsvcid": "4420", 00:24:41.988 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:41.988 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:41.988 "hdgst": false, 00:24:41.988 "ddgst": false 00:24:41.988 }, 00:24:41.988 "method": "bdev_nvme_attach_controller" 00:24:41.988 },{ 00:24:41.988 "params": { 00:24:41.988 "name": "Nvme4", 00:24:41.988 "trtype": "tcp", 00:24:41.988 "traddr": "10.0.0.2", 00:24:41.988 "adrfam": "ipv4", 00:24:41.988 "trsvcid": "4420", 00:24:41.988 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:41.988 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:41.988 "hdgst": false, 00:24:41.988 "ddgst": false 00:24:41.988 }, 00:24:41.988 "method": "bdev_nvme_attach_controller" 00:24:41.988 },{ 00:24:41.988 "params": { 00:24:41.988 "name": "Nvme5", 00:24:41.988 "trtype": "tcp", 00:24:41.988 "traddr": "10.0.0.2", 00:24:41.988 "adrfam": "ipv4", 00:24:41.988 "trsvcid": "4420", 00:24:41.988 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:41.988 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:41.988 "hdgst": false, 00:24:41.988 "ddgst": false 00:24:41.988 }, 00:24:41.988 "method": "bdev_nvme_attach_controller" 00:24:41.988 },{ 00:24:41.988 "params": { 00:24:41.988 "name": "Nvme6", 00:24:41.988 "trtype": "tcp", 00:24:41.988 "traddr": "10.0.0.2", 00:24:41.988 "adrfam": "ipv4", 00:24:41.988 "trsvcid": "4420", 00:24:41.988 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:41.988 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:41.988 "hdgst": false, 00:24:41.988 "ddgst": false 00:24:41.988 }, 00:24:41.988 "method": "bdev_nvme_attach_controller" 00:24:41.988 },{ 00:24:41.988 "params": { 00:24:41.988 "name": "Nvme7", 00:24:41.988 "trtype": "tcp", 00:24:41.988 "traddr": "10.0.0.2", 00:24:41.988 "adrfam": "ipv4", 00:24:41.988 "trsvcid": "4420", 00:24:41.988 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:41.988 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:41.988 "hdgst": false, 00:24:41.988 "ddgst": false 00:24:41.988 }, 00:24:41.988 "method": "bdev_nvme_attach_controller" 00:24:41.988 },{ 00:24:41.988 "params": { 00:24:41.988 "name": "Nvme8", 00:24:41.988 "trtype": "tcp", 00:24:41.988 "traddr": "10.0.0.2", 00:24:41.988 "adrfam": "ipv4", 00:24:41.988 "trsvcid": "4420", 00:24:41.988 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:41.988 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:41.988 "hdgst": false, 
00:24:41.988 "ddgst": false 00:24:41.988 }, 00:24:41.988 "method": "bdev_nvme_attach_controller" 00:24:41.988 },{ 00:24:41.988 "params": { 00:24:41.988 "name": "Nvme9", 00:24:41.988 "trtype": "tcp", 00:24:41.988 "traddr": "10.0.0.2", 00:24:41.988 "adrfam": "ipv4", 00:24:41.988 "trsvcid": "4420", 00:24:41.988 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:41.988 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:41.988 "hdgst": false, 00:24:41.988 "ddgst": false 00:24:41.988 }, 00:24:41.988 "method": "bdev_nvme_attach_controller" 00:24:41.988 },{ 00:24:41.988 "params": { 00:24:41.988 "name": "Nvme10", 00:24:41.988 "trtype": "tcp", 00:24:41.988 "traddr": "10.0.0.2", 00:24:41.988 "adrfam": "ipv4", 00:24:41.988 "trsvcid": "4420", 00:24:41.988 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:41.988 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:41.988 "hdgst": false, 00:24:41.988 "ddgst": false 00:24:41.988 }, 00:24:41.988 "method": "bdev_nvme_attach_controller" 00:24:41.988 }' 00:24:42.247 [2024-05-13 03:05:32.791652] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:24:42.247 [2024-05-13 03:05:32.791777] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid413319 ] 00:24:42.247 EAL: No free 2048 kB hugepages reported on node 1 00:24:42.247 [2024-05-13 03:05:32.828634] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:42.247 [2024-05-13 03:05:32.857574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.247 [2024-05-13 03:05:32.944233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:43.618 Running I/O for 1 seconds... 
00:24:45.000 
00:24:45.000 Latency(us)
00:24:45.000 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:45.000 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:45.000 Verification LBA range: start 0x0 length 0x400
00:24:45.000 Nvme1n1 : 1.19 107.49 6.72 0.00 0.00 590114.51 45632.47 472247.56
00:24:45.000 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:45.000 Verification LBA range: start 0x0 length 0x400
00:24:45.000 Nvme2n1 : 1.22 157.37 9.84 0.00 0.00 396913.52 84662.80 338651.21
00:24:45.000 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:45.000 Verification LBA range: start 0x0 length 0x400
00:24:45.000 Nvme3n1 : 1.22 157.07 9.82 0.00 0.00 391276.22 72235.24 351078.78
00:24:45.000 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:45.000 Verification LBA range: start 0x0 length 0x400
00:24:45.000 Nvme4n1 : 1.23 155.68 9.73 0.00 0.00 388780.94 24175.50 400789.05
00:24:45.000 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:45.000 Verification LBA range: start 0x0 length 0x400
00:24:45.000 Nvme5n1 : 1.14 112.53 7.03 0.00 0.00 524949.43 47768.46 441178.64
00:24:45.000 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:45.000 Verification LBA range: start 0x0 length 0x400
00:24:45.000 Nvme6n1 : 1.24 155.20 9.70 0.00 0.00 377777.11 24855.13 422537.29
00:24:45.000 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:45.000 Verification LBA range: start 0x0 length 0x400
00:24:45.000 Nvme7n1 : 1.24 154.39 9.65 0.00 0.00 374056.08 24466.77 425644.18
00:24:45.000 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:45.000 Verification LBA range: start 0x0 length 0x400
00:24:45.000 Nvme8n1 : 1.22 157.73 9.86 0.00 0.00 359260.67 96702.01 326223.64
00:24:45.000 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:45.000 Verification LBA range: start 0x0 length 0x400
00:24:45.000 Nvme9n1 : 1.25 153.65 9.60 0.00 0.00 363822.65 24175.50 431857.97
00:24:45.000 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:45.000 Verification LBA range: start 0x0 length 0x400
00:24:45.000 Nvme10n1 : 1.26 152.91 9.56 0.00 0.00 359815.14 22913.33 441178.64
00:24:45.000 ===================================================================================================================
00:24:45.000 Total : 1464.02 91.50 0.00 0.00 402329.82 22913.33 472247.56
00:24:45.258 03:05:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget
00:24:45.258 03:05:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:24:45.258 03:05:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:24:45.258 03:05:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:24:45.258 03:05:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini
00:24:45.259 03:05:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup
00:24:45.259 03:05:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync
00:24:45.259 03:05:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:45.259 03:05:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:24:45.259 03:05:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:45.259 03:05:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:45.259 rmmod nvme_tcp 00:24:45.259 rmmod nvme_fabrics 00:24:45.259 rmmod nvme_keyring 00:24:45.259 03:05:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:45.259 03:05:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:24:45.259 03:05:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:24:45.259 03:05:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 412835 ']' 00:24:45.259 03:05:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 412835 00:24:45.259 03:05:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@946 -- # '[' -z 412835 ']' 00:24:45.259 03:05:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # kill -0 412835 00:24:45.259 03:05:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # uname 00:24:45.259 03:05:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:45.259 03:05:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 412835 00:24:45.259 03:05:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:45.259 03:05:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:45.259 03:05:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 412835' 00:24:45.259 killing process with pid 412835 00:24:45.259 03:05:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@965 -- # kill 412835 00:24:45.259 [2024-05-13 03:05:35.911046] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:45.259 03:05:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # wait 412835 00:24:45.826 03:05:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:45.826 03:05:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:45.826 03:05:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:45.826 03:05:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:45.826 03:05:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:45.826 03:05:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.826 03:05:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:45.826 03:05:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:47.733 
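Condensed, the tc1 teardown traced above performs the steps below. The helper bodies are paraphrased from the xtrace rather than copied from the harness; $testdir stands for the long target directory above, and the netns deletion inside remove_spdk_ns is an assumption, since only the helper names appear in the log.

# tc1 cleanup as traced above (sketch).
rm -f ./local-job0-0-verify.state                      # stoptarget: drop bdevperf state
rm -rf "$testdir"/bdevperf.conf "$testdir"/rpcs.txt    # $testdir is an assumption for the path above
sync                                                   # nvmfcleanup
modprobe -v -r nvme-tcp                                # unloads nvme_tcp (rmmod output above)
modprobe -v -r nvme-fabrics                            # nvme_fabrics / nvme_keyring follow
kill 412835                                            # killprocess: stop the tc1 nvmf_tgt
ip netns delete cvl_0_0_ns_spdk 2> /dev/null || true   # remove_spdk_ns body is an assumption
ip -4 addr flush cvl_0_1                               # release the initiator-side address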
00:24:47.733 real 0m11.684s 00:24:47.733 user 0m33.725s 00:24:47.733 sys 0m3.082s 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:47.733 ************************************ 00:24:47.733 END TEST nvmf_shutdown_tc1 00:24:47.733 ************************************ 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:47.733 ************************************ 00:24:47.733 START TEST nvmf_shutdown_tc2 00:24:47.733 ************************************ 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc2 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@295 -- # net_devs=() 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:47.733 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:47.733 03:05:38 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:47.733 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:47.733 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:47.733 Found net devices 
under 0000:0a:00.1: cvl_0_1 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:24:47.733 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:47.734 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:47.734 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:47.734 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:47.734 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:47.734 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:47.734 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:47.734 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:47.734 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:47.734 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:47.734 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:47.734 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:47.734 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:47.734 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:47.734 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:47.734 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:47.991 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:47.991 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:47.991 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:47.991 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:47.991 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:47.991 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:47.991 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:47.991 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:47.991 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:24:47.991 00:24:47.991 --- 10.0.0.2 ping statistics --- 00:24:47.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:47.991 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:24:47.991 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:47.991 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:47.991 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:24:47.991 00:24:47.991 --- 10.0.0.1 ping statistics --- 00:24:47.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:47.991 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:24:47.991 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:47.991 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:24:47.991 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:47.991 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:47.991 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:47.991 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:47.991 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:47.991 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:47.991 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:47.991 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:47.991 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:47.991 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:47.991 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:47.991 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=414089 00:24:47.991 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:47.991 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 414089 00:24:47.991 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 414089 ']' 00:24:47.991 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:47.991 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:47.991 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:47.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
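The interface plumbing traced above reduces to the sequence below: the first e810 port is moved into a private network namespace and addressed as the target side (10.0.0.2), the second stays in the root namespace as the initiator side (10.0.0.1), TCP port 4420 is opened, connectivity is ping-checked in both directions, and nvmf_tgt is then started inside the namespace. This is a condensed sketch of the commands already shown, using the cvl_0_0/cvl_0_1 names the harness assigned.

# Condensed from the nvmf_tcp_init trace above.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
# The target is then launched inside the namespace:
ip netns exec cvl_0_0_ns_spdk "$rootdir"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E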
00:24:47.991 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:47.991 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:47.991 [2024-05-13 03:05:38.680765] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:24:47.991 [2024-05-13 03:05:38.680834] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:47.991 EAL: No free 2048 kB hugepages reported on node 1 00:24:47.991 [2024-05-13 03:05:38.716668] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:47.991 [2024-05-13 03:05:38.746912] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:48.250 [2024-05-13 03:05:38.841487] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:48.250 [2024-05-13 03:05:38.841531] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:48.250 [2024-05-13 03:05:38.841556] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:48.250 [2024-05-13 03:05:38.841569] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:48.250 [2024-05-13 03:05:38.841579] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:48.250 [2024-05-13 03:05:38.841655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:48.250 [2024-05-13 03:05:38.841802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:48.250 [2024-05-13 03:05:38.841835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:48.250 [2024-05-13 03:05:38.841838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:48.250 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:48.250 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:24:48.250 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:48.250 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:48.250 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:48.250 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:48.250 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:48.250 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.250 03:05:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:48.250 [2024-05-13 03:05:38.997581] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:48.250 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.250 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:48.250 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 
-- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:48.250 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:48.250 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:48.250 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:48.250 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:48.250 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:48.250 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:48.250 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:48.250 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:48.250 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:48.250 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:48.250 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:48.250 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:48.250 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:48.250 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:48.250 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:48.250 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:48.250 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:48.250 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:48.250 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:48.250 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:48.250 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:48.250 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:48.250 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:48.250 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:48.250 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.250 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:48.508 Malloc1 00:24:48.508 [2024-05-13 03:05:39.087149] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:48.508 [2024-05-13 03:05:39.087439] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:48.508 Malloc2 00:24:48.508 Malloc3 00:24:48.508 Malloc4 00:24:48.508 Malloc5 00:24:48.508 Malloc6 00:24:48.767 Malloc7 00:24:48.767 Malloc8 00:24:48.767 Malloc9 
00:24:48.767 Malloc10 00:24:48.767 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.767 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:48.767 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:48.767 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:48.767 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=414263 00:24:48.767 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 414263 /var/tmp/bdevperf.sock 00:24:48.767 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 414263 ']' 00:24:48.767 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:48.767 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:48.767 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:48.767 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:48.767 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:48.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
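The Malloc1-Malloc10 lines above are produced by the batched RPC file that create_subsystems builds: one malloc bdev, subsystem, namespace and TCP listener per index. The sketch below shows a single iteration; the RPC names are standard SPDK RPCs, while the malloc size/block size, the SPDK$i serial number and feeding rpcs.txt to rpc_cmd on stdin are assumptions, since the file contents are not shown in the trace.

# One iteration of create_subsystems (sketch); values flagged as
# assumptions are not visible in the trace above.
i=1
cat >> "$testdir/rpcs.txt" <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
rpc_cmd < "$testdir/rpcs.txt"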
00:24:48.767 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:24:48.767 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:48.767 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:24:48.767 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:48.767 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:48.767 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:48.767 { 00:24:48.767 "params": { 00:24:48.767 "name": "Nvme$subsystem", 00:24:48.767 "trtype": "$TEST_TRANSPORT", 00:24:48.767 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:48.767 "adrfam": "ipv4", 00:24:48.767 "trsvcid": "$NVMF_PORT", 00:24:48.767 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:48.767 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:48.767 "hdgst": ${hdgst:-false}, 00:24:48.767 "ddgst": ${ddgst:-false} 00:24:48.767 }, 00:24:48.767 "method": "bdev_nvme_attach_controller" 00:24:48.767 } 00:24:48.767 EOF 00:24:48.767 )") 00:24:48.767 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:48.767 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:48.767 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:48.767 { 00:24:48.767 "params": { 00:24:48.767 "name": "Nvme$subsystem", 00:24:48.767 "trtype": "$TEST_TRANSPORT", 00:24:48.767 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:48.767 "adrfam": "ipv4", 00:24:48.767 "trsvcid": "$NVMF_PORT", 00:24:48.767 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:48.767 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:48.767 "hdgst": ${hdgst:-false}, 00:24:48.767 "ddgst": ${ddgst:-false} 00:24:48.767 }, 00:24:48.767 "method": "bdev_nvme_attach_controller" 00:24:48.767 } 00:24:48.767 EOF 00:24:48.767 )") 00:24:48.767 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:48.767 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:48.767 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:48.767 { 00:24:48.767 "params": { 00:24:48.767 "name": "Nvme$subsystem", 00:24:48.767 "trtype": "$TEST_TRANSPORT", 00:24:48.767 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:48.767 "adrfam": "ipv4", 00:24:48.767 "trsvcid": "$NVMF_PORT", 00:24:48.767 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:48.767 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:48.767 "hdgst": ${hdgst:-false}, 00:24:48.767 "ddgst": ${ddgst:-false} 00:24:48.767 }, 00:24:48.767 "method": "bdev_nvme_attach_controller" 00:24:48.767 } 00:24:48.767 EOF 00:24:48.767 )") 00:24:48.767 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:49.026 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:49.026 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:49.026 { 00:24:49.026 "params": { 00:24:49.026 "name": "Nvme$subsystem", 00:24:49.026 "trtype": "$TEST_TRANSPORT", 00:24:49.026 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:49.026 "adrfam": "ipv4", 00:24:49.026 "trsvcid": "$NVMF_PORT", 
00:24:49.026 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:49.026 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:49.026 "hdgst": ${hdgst:-false}, 00:24:49.026 "ddgst": ${ddgst:-false} 00:24:49.026 }, 00:24:49.026 "method": "bdev_nvme_attach_controller" 00:24:49.026 } 00:24:49.026 EOF 00:24:49.026 )") 00:24:49.026 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:49.026 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:49.026 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:49.026 { 00:24:49.026 "params": { 00:24:49.026 "name": "Nvme$subsystem", 00:24:49.026 "trtype": "$TEST_TRANSPORT", 00:24:49.026 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:49.026 "adrfam": "ipv4", 00:24:49.026 "trsvcid": "$NVMF_PORT", 00:24:49.026 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:49.026 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:49.026 "hdgst": ${hdgst:-false}, 00:24:49.026 "ddgst": ${ddgst:-false} 00:24:49.026 }, 00:24:49.026 "method": "bdev_nvme_attach_controller" 00:24:49.026 } 00:24:49.026 EOF 00:24:49.026 )") 00:24:49.026 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:49.026 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:49.026 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:49.026 { 00:24:49.026 "params": { 00:24:49.026 "name": "Nvme$subsystem", 00:24:49.026 "trtype": "$TEST_TRANSPORT", 00:24:49.026 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:49.026 "adrfam": "ipv4", 00:24:49.026 "trsvcid": "$NVMF_PORT", 00:24:49.026 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:49.026 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:49.026 "hdgst": ${hdgst:-false}, 00:24:49.026 "ddgst": ${ddgst:-false} 00:24:49.026 }, 00:24:49.026 "method": "bdev_nvme_attach_controller" 00:24:49.026 } 00:24:49.026 EOF 00:24:49.026 )") 00:24:49.026 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:49.026 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:49.026 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:49.026 { 00:24:49.026 "params": { 00:24:49.026 "name": "Nvme$subsystem", 00:24:49.026 "trtype": "$TEST_TRANSPORT", 00:24:49.026 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:49.026 "adrfam": "ipv4", 00:24:49.026 "trsvcid": "$NVMF_PORT", 00:24:49.026 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:49.026 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:49.026 "hdgst": ${hdgst:-false}, 00:24:49.026 "ddgst": ${ddgst:-false} 00:24:49.026 }, 00:24:49.026 "method": "bdev_nvme_attach_controller" 00:24:49.026 } 00:24:49.026 EOF 00:24:49.026 )") 00:24:49.026 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:49.026 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:49.026 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:49.027 { 00:24:49.027 "params": { 00:24:49.027 "name": "Nvme$subsystem", 00:24:49.027 "trtype": "$TEST_TRANSPORT", 00:24:49.027 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:49.027 "adrfam": "ipv4", 00:24:49.027 "trsvcid": "$NVMF_PORT", 00:24:49.027 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:24:49.027 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:49.027 "hdgst": ${hdgst:-false}, 00:24:49.027 "ddgst": ${ddgst:-false} 00:24:49.027 }, 00:24:49.027 "method": "bdev_nvme_attach_controller" 00:24:49.027 } 00:24:49.027 EOF 00:24:49.027 )") 00:24:49.027 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:49.027 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:49.027 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:49.027 { 00:24:49.027 "params": { 00:24:49.027 "name": "Nvme$subsystem", 00:24:49.027 "trtype": "$TEST_TRANSPORT", 00:24:49.027 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:49.027 "adrfam": "ipv4", 00:24:49.027 "trsvcid": "$NVMF_PORT", 00:24:49.027 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:49.027 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:49.027 "hdgst": ${hdgst:-false}, 00:24:49.027 "ddgst": ${ddgst:-false} 00:24:49.027 }, 00:24:49.027 "method": "bdev_nvme_attach_controller" 00:24:49.027 } 00:24:49.027 EOF 00:24:49.027 )") 00:24:49.027 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:49.027 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:49.027 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:49.027 { 00:24:49.027 "params": { 00:24:49.027 "name": "Nvme$subsystem", 00:24:49.027 "trtype": "$TEST_TRANSPORT", 00:24:49.027 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:49.027 "adrfam": "ipv4", 00:24:49.027 "trsvcid": "$NVMF_PORT", 00:24:49.027 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:49.027 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:49.027 "hdgst": ${hdgst:-false}, 00:24:49.027 "ddgst": ${ddgst:-false} 00:24:49.027 }, 00:24:49.027 "method": "bdev_nvme_attach_controller" 00:24:49.027 } 00:24:49.027 EOF 00:24:49.027 )") 00:24:49.027 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:49.027 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:24:49.027 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:24:49.027 03:05:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:49.027 "params": { 00:24:49.027 "name": "Nvme1", 00:24:49.027 "trtype": "tcp", 00:24:49.027 "traddr": "10.0.0.2", 00:24:49.027 "adrfam": "ipv4", 00:24:49.027 "trsvcid": "4420", 00:24:49.027 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:49.027 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:49.027 "hdgst": false, 00:24:49.027 "ddgst": false 00:24:49.027 }, 00:24:49.027 "method": "bdev_nvme_attach_controller" 00:24:49.027 },{ 00:24:49.027 "params": { 00:24:49.027 "name": "Nvme2", 00:24:49.027 "trtype": "tcp", 00:24:49.027 "traddr": "10.0.0.2", 00:24:49.027 "adrfam": "ipv4", 00:24:49.027 "trsvcid": "4420", 00:24:49.027 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:49.027 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:49.027 "hdgst": false, 00:24:49.027 "ddgst": false 00:24:49.027 }, 00:24:49.027 "method": "bdev_nvme_attach_controller" 00:24:49.027 },{ 00:24:49.027 "params": { 00:24:49.027 "name": "Nvme3", 00:24:49.027 "trtype": "tcp", 00:24:49.027 "traddr": "10.0.0.2", 00:24:49.027 "adrfam": "ipv4", 00:24:49.027 "trsvcid": "4420", 00:24:49.027 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:49.027 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:49.027 "hdgst": false, 00:24:49.027 "ddgst": false 00:24:49.027 }, 00:24:49.027 "method": "bdev_nvme_attach_controller" 00:24:49.027 },{ 00:24:49.027 "params": { 00:24:49.027 "name": "Nvme4", 00:24:49.027 "trtype": "tcp", 00:24:49.027 "traddr": "10.0.0.2", 00:24:49.027 "adrfam": "ipv4", 00:24:49.027 "trsvcid": "4420", 00:24:49.027 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:49.027 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:49.027 "hdgst": false, 00:24:49.027 "ddgst": false 00:24:49.027 }, 00:24:49.027 "method": "bdev_nvme_attach_controller" 00:24:49.027 },{ 00:24:49.027 "params": { 00:24:49.027 "name": "Nvme5", 00:24:49.027 "trtype": "tcp", 00:24:49.027 "traddr": "10.0.0.2", 00:24:49.027 "adrfam": "ipv4", 00:24:49.027 "trsvcid": "4420", 00:24:49.027 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:49.027 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:49.027 "hdgst": false, 00:24:49.027 "ddgst": false 00:24:49.027 }, 00:24:49.027 "method": "bdev_nvme_attach_controller" 00:24:49.027 },{ 00:24:49.027 "params": { 00:24:49.027 "name": "Nvme6", 00:24:49.027 "trtype": "tcp", 00:24:49.027 "traddr": "10.0.0.2", 00:24:49.027 "adrfam": "ipv4", 00:24:49.027 "trsvcid": "4420", 00:24:49.027 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:49.027 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:49.027 "hdgst": false, 00:24:49.027 "ddgst": false 00:24:49.027 }, 00:24:49.027 "method": "bdev_nvme_attach_controller" 00:24:49.027 },{ 00:24:49.027 "params": { 00:24:49.027 "name": "Nvme7", 00:24:49.027 "trtype": "tcp", 00:24:49.027 "traddr": "10.0.0.2", 00:24:49.027 "adrfam": "ipv4", 00:24:49.027 "trsvcid": "4420", 00:24:49.027 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:49.027 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:49.027 "hdgst": false, 00:24:49.027 "ddgst": false 00:24:49.027 }, 00:24:49.027 "method": "bdev_nvme_attach_controller" 00:24:49.027 },{ 00:24:49.027 "params": { 00:24:49.027 "name": "Nvme8", 00:24:49.027 "trtype": "tcp", 00:24:49.027 "traddr": "10.0.0.2", 00:24:49.027 "adrfam": "ipv4", 00:24:49.027 "trsvcid": "4420", 00:24:49.027 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:49.027 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:49.027 "hdgst": false, 
00:24:49.027 "ddgst": false 00:24:49.027 }, 00:24:49.027 "method": "bdev_nvme_attach_controller" 00:24:49.027 },{ 00:24:49.027 "params": { 00:24:49.027 "name": "Nvme9", 00:24:49.027 "trtype": "tcp", 00:24:49.027 "traddr": "10.0.0.2", 00:24:49.027 "adrfam": "ipv4", 00:24:49.027 "trsvcid": "4420", 00:24:49.027 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:49.027 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:49.027 "hdgst": false, 00:24:49.027 "ddgst": false 00:24:49.027 }, 00:24:49.027 "method": "bdev_nvme_attach_controller" 00:24:49.027 },{ 00:24:49.027 "params": { 00:24:49.027 "name": "Nvme10", 00:24:49.027 "trtype": "tcp", 00:24:49.027 "traddr": "10.0.0.2", 00:24:49.027 "adrfam": "ipv4", 00:24:49.027 "trsvcid": "4420", 00:24:49.027 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:49.027 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:49.027 "hdgst": false, 00:24:49.027 "ddgst": false 00:24:49.027 }, 00:24:49.027 "method": "bdev_nvme_attach_controller" 00:24:49.027 }' 00:24:49.027 [2024-05-13 03:05:39.602378] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:24:49.027 [2024-05-13 03:05:39.602456] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid414263 ] 00:24:49.027 EAL: No free 2048 kB hugepages reported on node 1 00:24:49.027 [2024-05-13 03:05:39.639503] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:49.027 [2024-05-13 03:05:39.670196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.027 [2024-05-13 03:05:39.757512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:50.930 Running I/O for 10 seconds... 
00:24:50.930 03:05:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:50.930 03:05:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:24:50.930 03:05:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:50.930 03:05:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.930 03:05:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:50.930 03:05:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.930 03:05:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:50.930 03:05:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:50.930 03:05:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:50.930 03:05:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:24:50.930 03:05:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:24:50.930 03:05:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:50.930 03:05:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:50.930 03:05:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:50.930 03:05:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:50.930 03:05:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.930 03:05:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:50.930 03:05:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.930 03:05:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:24:50.930 03:05:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:24:50.930 03:05:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:51.188 03:05:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:51.188 03:05:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:51.188 03:05:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:51.188 03:05:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:51.188 03:05:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.188 03:05:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:51.188 03:05:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.188 03:05:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:24:51.188 03:05:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:24:51.188 03:05:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@67 -- # sleep 0.25 00:24:51.448 03:05:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:51.448 03:05:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:51.448 03:05:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:51.448 03:05:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:51.448 03:05:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.448 03:05:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:51.448 03:05:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.448 03:05:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:24:51.448 03:05:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:24:51.448 03:05:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:24:51.448 03:05:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:24:51.448 03:05:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:24:51.448 03:05:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 414263 00:24:51.448 03:05:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 414263 ']' 00:24:51.448 03:05:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 414263 00:24:51.448 03:05:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:24:51.448 03:05:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:51.448 03:05:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 414263 00:24:51.448 03:05:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:51.448 03:05:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:51.448 03:05:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 414263' 00:24:51.448 killing process with pid 414263 00:24:51.448 03:05:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 414263 00:24:51.448 03:05:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 414263 00:24:51.730 Received shutdown signal, test time was about 0.960305 seconds 00:24:51.730 00:24:51.730 Latency(us) 00:24:51.730 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:51.730 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:51.730 Verification LBA range: start 0x0 length 0x400 00:24:51.730 Nvme1n1 : 0.94 203.73 12.73 0.00 0.00 310231.04 24369.68 326223.64 00:24:51.730 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:51.730 Verification LBA range: start 0x0 length 0x400 00:24:51.730 Nvme2n1 : 0.96 133.87 8.37 0.00 0.00 463603.48 38059.43 493995.80 00:24:51.730 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:51.730 Verification LBA range: start 0x0 length 
0x400 00:24:51.730 Nvme3n1 : 0.96 333.52 20.84 0.00 0.00 182309.02 20097.71 183306.62 00:24:51.730 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:51.730 Verification LBA range: start 0x0 length 0x400 00:24:51.730 Nvme4n1 : 0.89 287.29 17.96 0.00 0.00 206061.61 20874.43 203501.42 00:24:51.730 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:51.730 Verification LBA range: start 0x0 length 0x400 00:24:51.730 Nvme5n1 : 0.93 276.30 17.27 0.00 0.00 209861.97 24078.41 237677.23 00:24:51.730 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:51.730 Verification LBA range: start 0x0 length 0x400 00:24:51.730 Nvme6n1 : 0.93 138.36 8.65 0.00 0.00 410924.37 75730.49 357292.56 00:24:51.730 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:51.730 Verification LBA range: start 0x0 length 0x400 00:24:51.730 Nvme7n1 : 0.93 207.03 12.94 0.00 0.00 265576.04 22039.51 259425.47 00:24:51.730 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:51.730 Verification LBA range: start 0x0 length 0x400 00:24:51.730 Nvme8n1 : 0.95 269.42 16.84 0.00 0.00 202637.08 28738.75 242337.56 00:24:51.730 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:51.730 Verification LBA range: start 0x0 length 0x400 00:24:51.730 Nvme9n1 : 0.95 201.32 12.58 0.00 0.00 265528.51 28156.21 310689.19 00:24:51.730 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:51.730 Verification LBA range: start 0x0 length 0x400 00:24:51.730 Nvme10n1 : 0.93 137.27 8.58 0.00 0.00 378995.29 42913.94 410109.72 00:24:51.730 =================================================================================================================== 00:24:51.730 Total : 2188.10 136.76 0.00 0.00 263026.28 20097.71 493995.80 00:24:51.989 03:05:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:24:52.921 03:05:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 414089 00:24:52.922 03:05:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:24:52.922 03:05:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:52.922 03:05:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:52.922 03:05:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:52.922 03:05:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:52.922 03:05:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:52.922 03:05:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:24:52.922 03:05:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:52.922 03:05:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:24:52.922 03:05:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:52.922 03:05:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:52.922 rmmod nvme_tcp 00:24:52.922 rmmod nvme_fabrics 00:24:52.922 rmmod nvme_keyring 00:24:52.922 03:05:43 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:52.922 03:05:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:24:52.922 03:05:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:24:52.922 03:05:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 414089 ']' 00:24:52.922 03:05:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 414089 00:24:52.922 03:05:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 414089 ']' 00:24:52.922 03:05:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 414089 00:24:52.922 03:05:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:24:52.922 03:05:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:52.922 03:05:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 414089 00:24:52.922 03:05:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:52.922 03:05:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:52.922 03:05:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 414089' 00:24:52.922 killing process with pid 414089 00:24:52.922 03:05:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 414089 00:24:52.922 [2024-05-13 03:05:43.644409] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:52.922 03:05:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 414089 00:24:53.486 03:05:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:53.486 03:05:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:53.486 03:05:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:53.486 03:05:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:53.486 03:05:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:53.486 03:05:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:53.486 03:05:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:53.486 03:05:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:55.390 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:55.390 00:24:55.390 real 0m7.678s 00:24:55.390 user 0m22.991s 00:24:55.390 sys 0m1.614s 00:24:55.390 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:55.390 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:55.390 ************************************ 00:24:55.390 END TEST nvmf_shutdown_tc2 00:24:55.390 ************************************ 00:24:55.390 03:05:46 
nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:24:55.390 03:05:46 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:24:55.390 03:05:46 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:55.390 03:05:46 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:55.649 ************************************ 00:24:55.649 START TEST nvmf_shutdown_tc3 00:24:55.649 ************************************ 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc3 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:24:55.649 03:05:46 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:55.649 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:55.649 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:55.649 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:55.650 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:55.650 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:55.650 
03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:55.650 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:55.650 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:24:55.650 00:24:55.650 --- 10.0.0.2 ping statistics --- 00:24:55.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.650 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:55.650 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:55.650 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:24:55.650 00:24:55.650 --- 10.0.0.1 ping statistics --- 00:24:55.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.650 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=415180 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 415180 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 415180 ']' 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:55.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:55.650 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:55.650 [2024-05-13 03:05:46.443691] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 
00:24:55.650 [2024-05-13 03:05:46.443777] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:55.909 EAL: No free 2048 kB hugepages reported on node 1 00:24:55.909 [2024-05-13 03:05:46.482934] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:55.909 [2024-05-13 03:05:46.509627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:55.909 [2024-05-13 03:05:46.599166] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:55.909 [2024-05-13 03:05:46.599215] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:55.909 [2024-05-13 03:05:46.599228] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:55.909 [2024-05-13 03:05:46.599238] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:55.909 [2024-05-13 03:05:46.599247] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:55.909 [2024-05-13 03:05:46.599328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:55.909 [2024-05-13 03:05:46.599393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:55.909 [2024-05-13 03:05:46.599458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:55.909 [2024-05-13 03:05:46.599460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:56.167 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:56.167 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:24:56.167 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:56.167 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:56.167 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:56.167 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:56.167 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:56.167 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.167 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:56.167 [2024-05-13 03:05:46.756419] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:56.167 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.167 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:56.167 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:56.167 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:56.167 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:56.167 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 
-- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:56.167 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:56.167 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:56.167 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:56.167 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:56.167 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:56.167 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:56.167 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:56.167 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:56.167 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:56.167 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:56.167 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:56.167 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:56.167 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:56.167 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:56.167 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:56.167 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:56.167 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:56.167 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:56.167 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:56.167 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:56.167 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:56.167 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.167 03:05:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:56.167 Malloc1 00:24:56.167 [2024-05-13 03:05:46.840162] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:56.167 [2024-05-13 03:05:46.840428] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:56.167 Malloc2 00:24:56.167 Malloc3 00:24:56.167 Malloc4 00:24:56.425 Malloc5 00:24:56.425 Malloc6 00:24:56.425 Malloc7 00:24:56.425 Malloc8 00:24:56.425 Malloc9 00:24:56.683 Malloc10 00:24:56.683 03:05:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.683 03:05:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:56.683 03:05:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:24:56.683 03:05:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:56.683 03:05:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=415360 00:24:56.683 03:05:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 415360 /var/tmp/bdevperf.sock 00:24:56.683 03:05:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 415360 ']' 00:24:56.683 03:05:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:56.683 03:05:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:56.683 03:05:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:56.683 03:05:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:56.683 03:05:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:24:56.683 03:05:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:56.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:56.683 03:05:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:24:56.683 03:05:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:56.683 03:05:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:56.683 03:05:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:56.683 03:05:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:56.683 { 00:24:56.683 "params": { 00:24:56.683 "name": "Nvme$subsystem", 00:24:56.683 "trtype": "$TEST_TRANSPORT", 00:24:56.683 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.683 "adrfam": "ipv4", 00:24:56.683 "trsvcid": "$NVMF_PORT", 00:24:56.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.683 "hdgst": ${hdgst:-false}, 00:24:56.683 "ddgst": ${ddgst:-false} 00:24:56.683 }, 00:24:56.683 "method": "bdev_nvme_attach_controller" 00:24:56.683 } 00:24:56.683 EOF 00:24:56.683 )") 00:24:56.683 03:05:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:56.683 03:05:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:56.683 03:05:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:56.683 { 00:24:56.683 "params": { 00:24:56.683 "name": "Nvme$subsystem", 00:24:56.683 "trtype": "$TEST_TRANSPORT", 00:24:56.684 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.684 "adrfam": "ipv4", 00:24:56.684 "trsvcid": "$NVMF_PORT", 00:24:56.684 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.684 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.684 "hdgst": ${hdgst:-false}, 00:24:56.684 "ddgst": ${ddgst:-false} 00:24:56.684 }, 00:24:56.684 "method": "bdev_nvme_attach_controller" 00:24:56.684 } 00:24:56.684 EOF 
00:24:56.684 )") 00:24:56.684 03:05:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:56.684 03:05:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:56.684 03:05:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:56.684 { 00:24:56.684 "params": { 00:24:56.684 "name": "Nvme$subsystem", 00:24:56.684 "trtype": "$TEST_TRANSPORT", 00:24:56.684 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.684 "adrfam": "ipv4", 00:24:56.684 "trsvcid": "$NVMF_PORT", 00:24:56.684 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.684 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.684 "hdgst": ${hdgst:-false}, 00:24:56.684 "ddgst": ${ddgst:-false} 00:24:56.684 }, 00:24:56.684 "method": "bdev_nvme_attach_controller" 00:24:56.684 } 00:24:56.684 EOF 00:24:56.684 )") 00:24:56.684 03:05:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:56.684 03:05:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:56.684 03:05:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:56.684 { 00:24:56.684 "params": { 00:24:56.684 "name": "Nvme$subsystem", 00:24:56.684 "trtype": "$TEST_TRANSPORT", 00:24:56.684 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.684 "adrfam": "ipv4", 00:24:56.684 "trsvcid": "$NVMF_PORT", 00:24:56.684 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.684 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.684 "hdgst": ${hdgst:-false}, 00:24:56.684 "ddgst": ${ddgst:-false} 00:24:56.684 }, 00:24:56.684 "method": "bdev_nvme_attach_controller" 00:24:56.684 } 00:24:56.684 EOF 00:24:56.684 )") 00:24:56.684 03:05:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:56.684 03:05:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:56.684 03:05:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:56.684 { 00:24:56.684 "params": { 00:24:56.684 "name": "Nvme$subsystem", 00:24:56.684 "trtype": "$TEST_TRANSPORT", 00:24:56.684 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.684 "adrfam": "ipv4", 00:24:56.684 "trsvcid": "$NVMF_PORT", 00:24:56.684 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.684 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.684 "hdgst": ${hdgst:-false}, 00:24:56.684 "ddgst": ${ddgst:-false} 00:24:56.684 }, 00:24:56.684 "method": "bdev_nvme_attach_controller" 00:24:56.684 } 00:24:56.684 EOF 00:24:56.684 )") 00:24:56.684 03:05:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:56.684 03:05:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:56.684 03:05:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:56.684 { 00:24:56.684 "params": { 00:24:56.684 "name": "Nvme$subsystem", 00:24:56.684 "trtype": "$TEST_TRANSPORT", 00:24:56.684 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.684 "adrfam": "ipv4", 00:24:56.684 "trsvcid": "$NVMF_PORT", 00:24:56.684 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.684 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.684 "hdgst": ${hdgst:-false}, 00:24:56.684 "ddgst": ${ddgst:-false} 00:24:56.684 }, 00:24:56.684 "method": "bdev_nvme_attach_controller" 00:24:56.684 } 00:24:56.684 EOF 00:24:56.684 )") 00:24:56.684 
03:05:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:56.684 03:05:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:56.684 03:05:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:56.684 { 00:24:56.684 "params": { 00:24:56.684 "name": "Nvme$subsystem", 00:24:56.684 "trtype": "$TEST_TRANSPORT", 00:24:56.684 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.684 "adrfam": "ipv4", 00:24:56.684 "trsvcid": "$NVMF_PORT", 00:24:56.684 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.684 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.684 "hdgst": ${hdgst:-false}, 00:24:56.684 "ddgst": ${ddgst:-false} 00:24:56.684 }, 00:24:56.684 "method": "bdev_nvme_attach_controller" 00:24:56.684 } 00:24:56.684 EOF 00:24:56.684 )") 00:24:56.684 03:05:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:56.684 03:05:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:56.684 03:05:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:56.684 { 00:24:56.684 "params": { 00:24:56.684 "name": "Nvme$subsystem", 00:24:56.684 "trtype": "$TEST_TRANSPORT", 00:24:56.684 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.684 "adrfam": "ipv4", 00:24:56.684 "trsvcid": "$NVMF_PORT", 00:24:56.684 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.684 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.684 "hdgst": ${hdgst:-false}, 00:24:56.684 "ddgst": ${ddgst:-false} 00:24:56.684 }, 00:24:56.684 "method": "bdev_nvme_attach_controller" 00:24:56.684 } 00:24:56.684 EOF 00:24:56.684 )") 00:24:56.684 03:05:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:56.684 03:05:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:56.684 03:05:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:56.684 { 00:24:56.684 "params": { 00:24:56.684 "name": "Nvme$subsystem", 00:24:56.684 "trtype": "$TEST_TRANSPORT", 00:24:56.684 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.684 "adrfam": "ipv4", 00:24:56.684 "trsvcid": "$NVMF_PORT", 00:24:56.684 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.684 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.684 "hdgst": ${hdgst:-false}, 00:24:56.684 "ddgst": ${ddgst:-false} 00:24:56.684 }, 00:24:56.684 "method": "bdev_nvme_attach_controller" 00:24:56.684 } 00:24:56.684 EOF 00:24:56.684 )") 00:24:56.684 03:05:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:56.684 03:05:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:56.684 03:05:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:56.684 { 00:24:56.684 "params": { 00:24:56.684 "name": "Nvme$subsystem", 00:24:56.684 "trtype": "$TEST_TRANSPORT", 00:24:56.684 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.684 "adrfam": "ipv4", 00:24:56.684 "trsvcid": "$NVMF_PORT", 00:24:56.684 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.684 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.684 "hdgst": ${hdgst:-false}, 00:24:56.684 "ddgst": ${ddgst:-false} 00:24:56.684 }, 00:24:56.684 "method": "bdev_nvme_attach_controller" 00:24:56.684 } 00:24:56.684 EOF 00:24:56.684 )") 00:24:56.684 03:05:47 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:56.684 03:05:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:24:56.684 03:05:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:24:56.684 03:05:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:56.684 "params": { 00:24:56.684 "name": "Nvme1", 00:24:56.684 "trtype": "tcp", 00:24:56.684 "traddr": "10.0.0.2", 00:24:56.684 "adrfam": "ipv4", 00:24:56.684 "trsvcid": "4420", 00:24:56.684 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:56.684 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:56.684 "hdgst": false, 00:24:56.684 "ddgst": false 00:24:56.684 }, 00:24:56.684 "method": "bdev_nvme_attach_controller" 00:24:56.684 },{ 00:24:56.684 "params": { 00:24:56.684 "name": "Nvme2", 00:24:56.684 "trtype": "tcp", 00:24:56.684 "traddr": "10.0.0.2", 00:24:56.684 "adrfam": "ipv4", 00:24:56.684 "trsvcid": "4420", 00:24:56.684 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:56.684 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:56.684 "hdgst": false, 00:24:56.684 "ddgst": false 00:24:56.684 }, 00:24:56.684 "method": "bdev_nvme_attach_controller" 00:24:56.684 },{ 00:24:56.684 "params": { 00:24:56.684 "name": "Nvme3", 00:24:56.684 "trtype": "tcp", 00:24:56.684 "traddr": "10.0.0.2", 00:24:56.684 "adrfam": "ipv4", 00:24:56.684 "trsvcid": "4420", 00:24:56.684 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:56.684 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:56.684 "hdgst": false, 00:24:56.684 "ddgst": false 00:24:56.684 }, 00:24:56.684 "method": "bdev_nvme_attach_controller" 00:24:56.684 },{ 00:24:56.684 "params": { 00:24:56.684 "name": "Nvme4", 00:24:56.684 "trtype": "tcp", 00:24:56.684 "traddr": "10.0.0.2", 00:24:56.684 "adrfam": "ipv4", 00:24:56.684 "trsvcid": "4420", 00:24:56.684 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:56.684 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:56.684 "hdgst": false, 00:24:56.684 "ddgst": false 00:24:56.684 }, 00:24:56.684 "method": "bdev_nvme_attach_controller" 00:24:56.684 },{ 00:24:56.684 "params": { 00:24:56.684 "name": "Nvme5", 00:24:56.684 "trtype": "tcp", 00:24:56.684 "traddr": "10.0.0.2", 00:24:56.684 "adrfam": "ipv4", 00:24:56.685 "trsvcid": "4420", 00:24:56.685 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:56.685 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:56.685 "hdgst": false, 00:24:56.685 "ddgst": false 00:24:56.685 }, 00:24:56.685 "method": "bdev_nvme_attach_controller" 00:24:56.685 },{ 00:24:56.685 "params": { 00:24:56.685 "name": "Nvme6", 00:24:56.685 "trtype": "tcp", 00:24:56.685 "traddr": "10.0.0.2", 00:24:56.685 "adrfam": "ipv4", 00:24:56.685 "trsvcid": "4420", 00:24:56.685 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:56.685 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:56.685 "hdgst": false, 00:24:56.685 "ddgst": false 00:24:56.685 }, 00:24:56.685 "method": "bdev_nvme_attach_controller" 00:24:56.685 },{ 00:24:56.685 "params": { 00:24:56.685 "name": "Nvme7", 00:24:56.685 "trtype": "tcp", 00:24:56.685 "traddr": "10.0.0.2", 00:24:56.685 "adrfam": "ipv4", 00:24:56.685 "trsvcid": "4420", 00:24:56.685 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:56.685 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:56.685 "hdgst": false, 00:24:56.685 "ddgst": false 00:24:56.685 }, 00:24:56.685 "method": "bdev_nvme_attach_controller" 00:24:56.685 },{ 00:24:56.685 "params": { 00:24:56.685 "name": "Nvme8", 00:24:56.685 "trtype": "tcp", 00:24:56.685 "traddr": "10.0.0.2", 00:24:56.685 "adrfam": "ipv4", 
00:24:56.685 "trsvcid": "4420", 00:24:56.685 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:56.685 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:56.685 "hdgst": false, 00:24:56.685 "ddgst": false 00:24:56.685 }, 00:24:56.685 "method": "bdev_nvme_attach_controller" 00:24:56.685 },{ 00:24:56.685 "params": { 00:24:56.685 "name": "Nvme9", 00:24:56.685 "trtype": "tcp", 00:24:56.685 "traddr": "10.0.0.2", 00:24:56.685 "adrfam": "ipv4", 00:24:56.685 "trsvcid": "4420", 00:24:56.685 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:56.685 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:56.685 "hdgst": false, 00:24:56.685 "ddgst": false 00:24:56.685 }, 00:24:56.685 "method": "bdev_nvme_attach_controller" 00:24:56.685 },{ 00:24:56.685 "params": { 00:24:56.685 "name": "Nvme10", 00:24:56.685 "trtype": "tcp", 00:24:56.685 "traddr": "10.0.0.2", 00:24:56.685 "adrfam": "ipv4", 00:24:56.685 "trsvcid": "4420", 00:24:56.685 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:56.685 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:56.685 "hdgst": false, 00:24:56.685 "ddgst": false 00:24:56.685 }, 00:24:56.685 "method": "bdev_nvme_attach_controller" 00:24:56.685 }' 00:24:56.685 [2024-05-13 03:05:47.333835] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:24:56.685 [2024-05-13 03:05:47.333918] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid415360 ] 00:24:56.685 EAL: No free 2048 kB hugepages reported on node 1 00:24:56.685 [2024-05-13 03:05:47.369812] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:56.685 [2024-05-13 03:05:47.399085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.943 [2024-05-13 03:05:47.485936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:58.841 Running I/O for 10 seconds... 
00:24:58.841 03:05:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:58.841 03:05:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:24:58.841 03:05:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:58.841 03:05:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.841 03:05:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:58.841 03:05:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.841 03:05:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:58.841 03:05:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:58.841 03:05:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:58.841 03:05:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:58.841 03:05:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:24:58.841 03:05:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:24:58.841 03:05:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:58.841 03:05:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:58.841 03:05:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:58.841 03:05:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:58.841 03:05:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.841 03:05:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:58.841 03:05:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.841 03:05:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:24:58.841 03:05:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:24:58.841 03:05:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:59.100 03:05:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:59.100 03:05:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:59.100 03:05:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:59.100 03:05:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:59.100 03:05:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.100 03:05:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:59.100 03:05:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.100 03:05:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 
-- # read_io_count=67 00:24:59.100 03:05:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:24:59.100 03:05:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:59.369 03:05:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:59.369 03:05:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:59.369 03:05:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:59.369 03:05:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:59.369 03:05:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.369 03:05:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:59.369 03:05:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.369 03:05:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:24:59.369 03:05:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:24:59.369 03:05:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:24:59.369 03:05:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:24:59.369 03:05:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:24:59.369 03:05:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 415180 00:24:59.369 03:05:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@946 -- # '[' -z 415180 ']' 00:24:59.369 03:05:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # kill -0 415180 00:24:59.369 03:05:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # uname 00:24:59.369 03:05:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:59.369 03:05:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 415180 00:24:59.369 03:05:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:59.369 03:05:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:59.369 03:05:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 415180' 00:24:59.369 killing process with pid 415180 00:24:59.369 03:05:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # kill 415180 00:24:59.369 03:05:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # wait 415180 00:24:59.369 [2024-05-13 03:05:50.114361] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:59.369 [2024-05-13 03:05:50.115035] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115071] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 
03:05:50.115087] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115101] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115113] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115126] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115138] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115151] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115163] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115176] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115188] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115201] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115214] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115227] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115240] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115252] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115265] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115277] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115308] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115322] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115335] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115347] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115360] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115374] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same 
with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115387] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115401] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115413] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115425] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115439] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115452] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115465] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115477] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115490] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115503] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115516] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115529] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115542] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115554] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115567] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115582] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115595] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115609] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115622] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115636] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115649] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115662] 
tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115678] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115691] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115713] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115727] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115740] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115756] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115768] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115781] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115794] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115807] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115820] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115833] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115846] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115859] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115871] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115886] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.115899] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae000 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.118108] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.118142] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.118160] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.118174] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the 
state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.118186] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.118199] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.118215] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.118229] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.118241] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.118255] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.118275] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.118289] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.118301] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.118316] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.118329] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.118341] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.118356] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.118370] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.118382] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.118396] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.118411] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.118424] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.118436] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.118449] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.370 [2024-05-13 03:05:50.118464] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.371 [2024-05-13 03:05:50.118477] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.371 [2024-05-13 03:05:50.118489] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.371 [2024-05-13 03:05:50.118501] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.371 [2024-05-13 03:05:50.118516] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.371 [2024-05-13 03:05:50.118529] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.371 [2024-05-13 03:05:50.118541] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.371 [2024-05-13 03:05:50.118553] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.371 [2024-05-13 03:05:50.118568] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.371 [2024-05-13 03:05:50.118581] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.371 [2024-05-13 03:05:50.118594] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.371 [2024-05-13 03:05:50.118609] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.371 [2024-05-13 03:05:50.118622] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.371 [2024-05-13 03:05:50.118639] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.371 [2024-05-13 03:05:50.118652] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.371 [2024-05-13 03:05:50.118679] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.371 [2024-05-13 03:05:50.118694] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.371 [2024-05-13 03:05:50.118716] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.371 [2024-05-13 03:05:50.118735] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.371 [2024-05-13 03:05:50.118748] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.371 [2024-05-13 03:05:50.118762] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.371 [2024-05-13 03:05:50.118776] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.371 [2024-05-13 03:05:50.118799] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.371 [2024-05-13 
03:05:50.118815] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.371 [2024-05-13 03:05:50.118828] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.371 [2024-05-13 03:05:50.118840] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.371 [2024-05-13 03:05:50.118853] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.371 [2024-05-13 03:05:50.118867] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.371 [2024-05-13 03:05:50.118880] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.371 [2024-05-13 03:05:50.118892] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.371 [2024-05-13 03:05:50.118905] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.371 [2024-05-13 03:05:50.118919] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.371 [2024-05-13 03:05:50.118931] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.371 [2024-05-13 03:05:50.118943] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.371 [2024-05-13 03:05:50.118955] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.371 [2024-05-13 03:05:50.118970] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.371 [2024-05-13 03:05:50.118987] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.371 [2024-05-13 03:05:50.118999] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.371 [2024-05-13 03:05:50.119013] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae4a0 is same with the state(5) to be set 00:24:59.371 [2024-05-13 03:05:50.119843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.371 [2024-05-13 03:05:50.119890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.371 [2024-05-13 03:05:50.119909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.371 [2024-05-13 03:05:50.119924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.371 [2024-05-13 03:05:50.119937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.371 [2024-05-13 03:05:50.119951] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.371 [2024-05-13 03:05:50.119965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.371 [2024-05-13 03:05:50.119983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.371 [2024-05-13 03:05:50.119996] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da65c0 is same with the state(5) to be set 00:24:59.371 [2024-05-13 03:05:50.120083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.371 [2024-05-13 03:05:50.120113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.371 [2024-05-13 03:05:50.120128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.371 [2024-05-13 03:05:50.120142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.371 [2024-05-13 03:05:50.120156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.371 [2024-05-13 03:05:50.120170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.371 [2024-05-13 03:05:50.120184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.371 [2024-05-13 03:05:50.120197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.371 [2024-05-13 03:05:50.120210] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5e730 is same with the state(5) to be set 00:24:59.371 [2024-05-13 03:05:50.120271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.371 [2024-05-13 03:05:50.120292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.371 [2024-05-13 03:05:50.120307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.371 [2024-05-13 03:05:50.120321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.371 [2024-05-13 03:05:50.120336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.371 [2024-05-13 03:05:50.120351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.371 [2024-05-13 03:05:50.120366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.371 [2024-05-13 03:05:50.120380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:59.371 [2024-05-13 03:05:50.120398] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f70cc0 is same with the state(5) to be set 00:24:59.371 [2024-05-13 03:05:50.121896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.371 [2024-05-13 03:05:50.121926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.371 [2024-05-13 03:05:50.121953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.371 [2024-05-13 03:05:50.121969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.371 [2024-05-13 03:05:50.121989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.371 [2024-05-13 03:05:50.122004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.371 [2024-05-13 03:05:50.122020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.371 [2024-05-13 03:05:50.122035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.371 [2024-05-13 03:05:50.122052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.371 [2024-05-13 03:05:50.122066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.371 [2024-05-13 03:05:50.122082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.371 [2024-05-13 03:05:50.122095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.371 [2024-05-13 03:05:50.122112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.371 [2024-05-13 03:05:50.122126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.371 [2024-05-13 03:05:50.122141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.371 [2024-05-13 03:05:50.122155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.371 [2024-05-13 03:05:50.122171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.371 [2024-05-13 03:05:50.122185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.371 [2024-05-13 03:05:50.122201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:59.372 [2024-05-13 03:05:50.122215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.372 [2024-05-13 03:05:50.122231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.372 [2024-05-13 03:05:50.122245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.372 [2024-05-13 03:05:50.122261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.372 [2024-05-13 03:05:50.122275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.372 [2024-05-13 03:05:50.122296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.372 [2024-05-13 03:05:50.122311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.372 [2024-05-13 03:05:50.122327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.372 [2024-05-13 03:05:50.122341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.372 [2024-05-13 03:05:50.122357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.372 [2024-05-13 03:05:50.122372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.372 [2024-05-13 03:05:50.122387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.372 [2024-05-13 03:05:50.122401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.372 [2024-05-13 03:05:50.122417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.372 [2024-05-13 03:05:50.122431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.372 [2024-05-13 03:05:50.122446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.372 [2024-05-13 03:05:50.122461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.372 [2024-05-13 03:05:50.122477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.372 [2024-05-13 03:05:50.122491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.372 [2024-05-13 03:05:50.122506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.372 
[2024-05-13 03:05:50.122520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.372 [2024-05-13 03:05:50.122536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.372 [2024-05-13 03:05:50.122550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.372 [2024-05-13 03:05:50.122566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.372 [2024-05-13 03:05:50.122581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.372 [2024-05-13 03:05:50.122597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.372 [2024-05-13 03:05:50.122611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.372 [2024-05-13 03:05:50.122627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.372 [2024-05-13 03:05:50.122641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.372 [2024-05-13 03:05:50.122657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.372 [2024-05-13 03:05:50.122675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.372 [2024-05-13 03:05:50.122692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.372 [2024-05-13 03:05:50.122716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.372 [2024-05-13 03:05:50.122732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.372 [2024-05-13 03:05:50.122747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.372 [2024-05-13 03:05:50.122763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.372 [2024-05-13 03:05:50.122782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.372 [2024-05-13 03:05:50.122798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.372 [2024-05-13 03:05:50.122812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.372 [2024-05-13 03:05:50.122827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.372 [2024-05-13 
03:05:50.122841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.372 [2024-05-13 03:05:50.122856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.372 [2024-05-13 03:05:50.122870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.372 [2024-05-13 03:05:50.122885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.372 [2024-05-13 03:05:50.122899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.372 [2024-05-13 03:05:50.122914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.372 [2024-05-13 03:05:50.122934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.372 [2024-05-13 03:05:50.122950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.372 [2024-05-13 03:05:50.122963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.372 [2024-05-13 03:05:50.122993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.372 [2024-05-13 03:05:50.123007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.372 [2024-05-13 03:05:50.123025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.372 [2024-05-13 03:05:50.123040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.372 [2024-05-13 03:05:50.123060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.372 [2024-05-13 03:05:50.123073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.372 [2024-05-13 03:05:50.123109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.372 [2024-05-13 03:05:50.123124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.372 [2024-05-13 03:05:50.123139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.372 [2024-05-13 03:05:50.123152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.372 [2024-05-13 03:05:50.123167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.372 [2024-05-13 
03:05:50.123185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.372 [2024-05-13 03:05:50.123201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.372 [2024-05-13 03:05:50.123214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.372 [2024-05-13 03:05:50.123231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.372 [2024-05-13 03:05:50.123245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.372 [2024-05-13 03:05:50.123260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.372 [2024-05-13 03:05:50.123273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.372 [2024-05-13 03:05:50.123291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.372 [2024-05-13 03:05:50.123306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.372 [2024-05-13 03:05:50.123321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.372 [2024-05-13 03:05:50.123335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.372 [2024-05-13 03:05:50.123350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.372 [2024-05-13 03:05:50.123363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.372 [2024-05-13 03:05:50.123378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.372 [2024-05-13 03:05:50.123392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.372 [2024-05-13 03:05:50.123407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.372 [2024-05-13 03:05:50.123420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.372 [2024-05-13 03:05:50.123436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.372 [2024-05-13 03:05:50.123449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.372 [2024-05-13 03:05:50.123465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.373 [2024-05-13 
03:05:50.123486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.373 [2024-05-13 03:05:50.123502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.373 [2024-05-13 03:05:50.123516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.373 [2024-05-13 03:05:50.123532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.373 [2024-05-13 03:05:50.123545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.373 [2024-05-13 03:05:50.123560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.373 [2024-05-13 03:05:50.123574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.373 [2024-05-13 03:05:50.123589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.373 [2024-05-13 03:05:50.123603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.373 [2024-05-13 03:05:50.123618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.373 [2024-05-13 03:05:50.123631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.373 [2024-05-13 03:05:50.123646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.373 [2024-05-13 03:05:50.123660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.373 [2024-05-13 03:05:50.123691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.373 [2024-05-13 03:05:50.123713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.373 [2024-05-13 03:05:50.123731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.373 [2024-05-13 03:05:50.123745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.373 [2024-05-13 03:05:50.123760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.373 [2024-05-13 03:05:50.123775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.373 [2024-05-13 03:05:50.123790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.373 [2024-05-13 
03:05:50.123804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.373 [2024-05-13 03:05:50.123820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.373 [2024-05-13 03:05:50.123834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.373 [2024-05-13 03:05:50.123850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.373 [2024-05-13 03:05:50.123863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.373 [2024-05-13 03:05:50.123883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.373 [2024-05-13 03:05:50.123897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.373 [2024-05-13 03:05:50.123913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.373 [2024-05-13 03:05:50.123927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.373 [2024-05-13 03:05:50.123941] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e344c0 is same with the state(5) to be set 00:24:59.373 [2024-05-13 03:05:50.124467] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e344c0 was disconnected and freed. reset controller. 
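The xtrace near the top of this excerpt (target/shutdown.sh@59-67) is the waitforio helper: it polls the bdevperf application over its RPC socket with bdev_get_iostat, pulls num_read_ops out of the reply with jq, and retries (at most 10 times, sleeping 0.25 s in between) until at least 100 reads have completed; here the counter goes 3, 67, 131 before the loop breaks and the target process (pid 415180) is killed to exercise shutdown under active I/O. A minimal standalone sketch of that polling pattern, assuming SPDK's scripts/rpc.py and jq are in PATH and that bdevperf is listening on the socket named below:

#!/usr/bin/env bash
# Sketch of the waitforio polling pattern traced above (assumptions: rpc.py from
# SPDK's scripts/ directory and jq are in PATH; bdevperf serves RPCs on
# /var/tmp/bdevperf.sock and exposes a bdev named Nvme1n1).
sock=/var/tmp/bdevperf.sock
bdev=Nvme1n1
for attempt in {1..10}; do
    # Ask bdevperf for per-bdev I/O statistics and extract the read counter.
    reads=$(rpc.py -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
    if [ "$reads" -ge 100 ]; then
        echo "observed $reads reads on $bdev; I/O is flowing, target can be shut down"
        exit 0
    fi
    sleep 0.25
done
echo "no sustained I/O on $bdev after 10 polls" >&2
exit 1

Waiting for the read counter first ensures bdevperf really has I/O in flight when the nvmf target is killed, which is what drives the qpair teardown and the flood of ABORTED - SQ DELETION completions logged above and below.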
00:24:59.373 [2024-05-13 03:05:50.124515] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.373 [2024-05-13 03:05:50.124547] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.373 [2024-05-13 03:05:50.124562] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.373 [2024-05-13 03:05:50.124575] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.373 [2024-05-13 03:05:50.124588] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.373 [2024-05-13 03:05:50.124601] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.373 [2024-05-13 03:05:50.124614] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.373 [2024-05-13 03:05:50.124626] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.373 [2024-05-13 03:05:50.124639] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.373 [2024-05-13 03:05:50.124652] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.373 [2024-05-13 03:05:50.124665] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.373 [2024-05-13 03:05:50.124677] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.373 [2024-05-13 03:05:50.124690] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.373 [2024-05-13 03:05:50.124717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.373 [2024-05-13 03:05:50.124727] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.373 [2024-05-13 03:05:50.124742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:59.373 [2024-05-13 03:05:50.124754] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.373 [2024-05-13 03:05:50.124766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.373 [2024-05-13 03:05:50.124767] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.373 [2024-05-13 03:05:50.124781] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.373 [2024-05-13 03:05:50.124784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:59.373 [2024-05-13 03:05:50.124794] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.373 [2024-05-13 03:05:50.124806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.373 [2024-05-13 03:05:50.124807] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.373 [2024-05-13 03:05:50.124822] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.373 [2024-05-13 03:05:50.124824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:59.373 [2024-05-13 03:05:50.124835] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.373 [2024-05-13 03:05:50.124841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.373 [2024-05-13 03:05:50.124847] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.373 [2024-05-13 03:05:50.124855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:59.373 [2024-05-13 03:05:50.124860] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.373 [2024-05-13 03:05:50.124872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.373 [2024-05-13 03:05:50.124873] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.373 [2024-05-13 03:05:50.124888] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.373 [2024-05-13 03:05:50.124888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:59.373 [2024-05-13 03:05:50.124903] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.373 [2024-05-13 03:05:50.124907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.373 [2024-05-13 03:05:50.124915] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.373 [2024-05-13 03:05:50.124921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:59.373 [2024-05-13 03:05:50.124928] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.373 [2024-05-13 03:05:50.124938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.373 [2024-05-13 03:05:50.124941] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.373 [2024-05-13 03:05:50.124952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:59.373 [2024-05-13 03:05:50.124955] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.373 [2024-05-13 03:05:50.124968] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.373 [2024-05-13 03:05:50.124968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.373 [2024-05-13 03:05:50.124982] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.373 [2024-05-13 03:05:50.124984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:59.373 [2024-05-13 03:05:50.125010] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.373 [2024-05-13 03:05:50.125013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.374 [2024-05-13 03:05:50.125024] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.374 [2024-05-13 03:05:50.125027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:59.374 [2024-05-13 03:05:50.125037] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.374 [2024-05-13 03:05:50.125044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.374 [2024-05-13 03:05:50.125050] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.374 [2024-05-13 03:05:50.125058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:59.374 [2024-05-13 03:05:50.125062] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.374 [2024-05-13 03:05:50.125075] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.374 [2024-05-13 03:05:50.125075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.374 [2024-05-13 03:05:50.125089] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.374 [2024-05-13 03:05:50.125092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:59.374 [2024-05-13 03:05:50.125102] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.374 [2024-05-13 03:05:50.125109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.374 [2024-05-13 03:05:50.125115] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.374 [2024-05-13 03:05:50.125123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:59.374 [2024-05-13 03:05:50.125128] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.374 [2024-05-13 03:05:50.125140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.374 [2024-05-13 03:05:50.125141] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.374 [2024-05-13 03:05:50.125155] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.374 [2024-05-13 03:05:50.125156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:59.374 [2024-05-13 03:05:50.125170] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.374 [2024-05-13 03:05:50.125174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.374 [2024-05-13 03:05:50.125182] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.374 [2024-05-13 03:05:50.125189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:59.374 [2024-05-13 03:05:50.125199] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.374 [2024-05-13 03:05:50.125206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.374 [2024-05-13 03:05:50.125212] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.374 [2024-05-13 03:05:50.125220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:59.374 [2024-05-13 03:05:50.125225] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.374 [2024-05-13 03:05:50.125237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.374 [2024-05-13 03:05:50.125238] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.374 [2024-05-13 03:05:50.125253] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.374 [2024-05-13 03:05:50.125253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:59.374 [2024-05-13 03:05:50.125268] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set
00:24:59.374 [2024-05-13 03:05:50.125271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.374 [2024-05-13 03:05:50.125281] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set 00:24:59.374 [2024-05-13 03:05:50.125286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.374 [2024-05-13 03:05:50.125293] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set 00:24:59.374 [2024-05-13 03:05:50.125303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.374 [2024-05-13 03:05:50.125306] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set 00:24:59.374 [2024-05-13 03:05:50.125318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-05-13 03:05:50.125320] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.374 the state(5) to be set 00:24:59.374 [2024-05-13 03:05:50.125333] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set 00:24:59.374 [2024-05-13 03:05:50.125336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.374 [2024-05-13 03:05:50.125350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.374 [2024-05-13 03:05:50.125355] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set 00:24:59.374 [2024-05-13 03:05:50.125366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:1[2024-05-13 03:05:50.125368] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.374 the state(5) to be set 00:24:59.374 [2024-05-13 03:05:50.125382] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with [2024-05-13 03:05:50.125382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:24:59.374 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.374 [2024-05-13 03:05:50.125399] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set 00:24:59.374 [2024-05-13 03:05:50.125403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.374 [2024-05-13 03:05:50.125412] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the state(5) to be set 00:24:59.374 [2024-05-13 03:05:50.125417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.374 [2024-05-13 03:05:50.125424] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aede0 is same with the 
state(5) to be set 00:24:59.374 [2024-05-13 03:05:50.125434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.374 [2024-05-13 03:05:50.125448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.374 [2024-05-13 03:05:50.125464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.374 [2024-05-13 03:05:50.125477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.374 [2024-05-13 03:05:50.125493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.374 [2024-05-13 03:05:50.125507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.374 [2024-05-13 03:05:50.125523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.374 [2024-05-13 03:05:50.125536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.375 [2024-05-13 03:05:50.125552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.375 [2024-05-13 03:05:50.125566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.375 [2024-05-13 03:05:50.125581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.375 [2024-05-13 03:05:50.125595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.375 [2024-05-13 03:05:50.125611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.375 [2024-05-13 03:05:50.125625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.375 [2024-05-13 03:05:50.125641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.375 [2024-05-13 03:05:50.125654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.375 [2024-05-13 03:05:50.125670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.375 [2024-05-13 03:05:50.125684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.375 [2024-05-13 03:05:50.125707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.375 [2024-05-13 03:05:50.125726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:59.375 [2024-05-13 03:05:50.125754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.375 [2024-05-13 03:05:50.125768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.375 [2024-05-13 03:05:50.125785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.375 [2024-05-13 03:05:50.125799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.375 [2024-05-13 03:05:50.125814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.375 [2024-05-13 03:05:50.125828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.375 [2024-05-13 03:05:50.125844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.375 [2024-05-13 03:05:50.125859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.375 [2024-05-13 03:05:50.125875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.375 [2024-05-13 03:05:50.125888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.375 [2024-05-13 03:05:50.125904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.375 [2024-05-13 03:05:50.125917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.375 [2024-05-13 03:05:50.125933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.375 [2024-05-13 03:05:50.125947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.375 [2024-05-13 03:05:50.125962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.375 [2024-05-13 03:05:50.125976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.375 [2024-05-13 03:05:50.126004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.375 [2024-05-13 03:05:50.126017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.375 [2024-05-13 03:05:50.126033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.375 [2024-05-13 03:05:50.126047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.375 [2024-05-13 
03:05:50.126062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.375 [2024-05-13 03:05:50.126079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.375 [2024-05-13 03:05:50.126095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.375 [2024-05-13 03:05:50.126115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.375 [2024-05-13 03:05:50.126136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.375 [2024-05-13 03:05:50.126151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.375 [2024-05-13 03:05:50.126167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.375 [2024-05-13 03:05:50.126181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.375 [2024-05-13 03:05:50.126199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.375 [2024-05-13 03:05:50.126223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.375 [2024-05-13 03:05:50.126239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.375 [2024-05-13 03:05:50.126254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.375 [2024-05-13 03:05:50.126248] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.375 [2024-05-13 03:05:50.126270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.375 [2024-05-13 03:05:50.126277] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.375 [2024-05-13 03:05:50.126284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.375 [2024-05-13 03:05:50.126292] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.375 [2024-05-13 03:05:50.126301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.375 [2024-05-13 03:05:50.126305] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.375 [2024-05-13 03:05:50.126316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.375 [2024-05-13 03:05:50.126329] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.375 [2024-05-13 03:05:50.126334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.375 [2024-05-13 03:05:50.126342] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.375 [2024-05-13 03:05:50.126348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.375 [2024-05-13 03:05:50.126356] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.375 [2024-05-13 03:05:50.126365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.375 [2024-05-13 03:05:50.126369] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.375 [2024-05-13 03:05:50.126379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.375 [2024-05-13 03:05:50.126382] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.375 [2024-05-13 03:05:50.126395] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with [2024-05-13 03:05:50.126396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:12the state(5) to be set 00:24:59.375 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.375 [2024-05-13 03:05:50.126420] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with [2024-05-13 03:05:50.126421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:24:59.375 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.375 [2024-05-13 03:05:50.126435] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.375 [2024-05-13 03:05:50.126439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.375 [2024-05-13 03:05:50.126448] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.375 [2024-05-13 03:05:50.126454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.375 [2024-05-13 03:05:50.126462] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.375 [2024-05-13 03:05:50.126470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.375 [2024-05-13 03:05:50.126474] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.375 [2024-05-13 03:05:50.126485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.375 [2024-05-13 03:05:50.126487] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.375 [2024-05-13 03:05:50.126501] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.375 [2024-05-13 03:05:50.126502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.375 [2024-05-13 03:05:50.126513] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.375 [2024-05-13 03:05:50.126517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.375 [2024-05-13 03:05:50.126526] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.375 [2024-05-13 03:05:50.126547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.375 [2024-05-13 03:05:50.126554] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.375 [2024-05-13 03:05:50.126563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.375 [2024-05-13 03:05:50.126566] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.376 [2024-05-13 03:05:50.126579] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with [2024-05-13 03:05:50.126578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:12the state(5) to be set 00:24:59.376 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.376 [2024-05-13 03:05:50.126593] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.376 [2024-05-13 03:05:50.126595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.376 [2024-05-13 03:05:50.126606] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.376 [2024-05-13 03:05:50.126611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.376 [2024-05-13 03:05:50.126621] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.376 [2024-05-13 03:05:50.126625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.376 [2024-05-13 03:05:50.126634] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.376 [2024-05-13 03:05:50.126642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.376 [2024-05-13 03:05:50.126646] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.376 [2024-05-13 03:05:50.126657] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-05-13 03:05:50.126659] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.376 the state(5) to be set 00:24:59.376 [2024-05-13 03:05:50.126707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:12[2024-05-13 03:05:50.126707] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.376 the state(5) to be set 00:24:59.376 [2024-05-13 03:05:50.126726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-05-13 03:05:50.126727] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.376 the state(5) to be set 00:24:59.376 [2024-05-13 03:05:50.126742] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.376 [2024-05-13 03:05:50.126745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.376 [2024-05-13 03:05:50.126755] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.376 [2024-05-13 03:05:50.126759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.376 [2024-05-13 03:05:50.126769] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.376 [2024-05-13 03:05:50.126776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.376 [2024-05-13 03:05:50.126782] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.376 [2024-05-13 03:05:50.126791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.376 [2024-05-13 03:05:50.126795] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.376 [2024-05-13 03:05:50.126808] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with [2024-05-13 03:05:50.126807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:12the state(5) to be set 00:24:59.376 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.376 [2024-05-13 03:05:50.126822] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.376 [2024-05-13 03:05:50.126825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.376 [2024-05-13 03:05:50.126835] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.376 [2024-05-13 03:05:50.126841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.376 [2024-05-13 03:05:50.126848] 
tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.376 [2024-05-13 03:05:50.126860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-05-13 03:05:50.126861] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.376 the state(5) to be set 00:24:59.376 [2024-05-13 03:05:50.126876] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.376 [2024-05-13 03:05:50.126889] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.376 [2024-05-13 03:05:50.126902] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.376 [2024-05-13 03:05:50.126914] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.376 [2024-05-13 03:05:50.126926] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.376 [2024-05-13 03:05:50.126938] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.376 [2024-05-13 03:05:50.126941] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d95fc0 was disconnected and freed. reset controller. 00:24:59.376 [2024-05-13 03:05:50.126951] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.376 [2024-05-13 03:05:50.126963] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.376 [2024-05-13 03:05:50.126976] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.376 [2024-05-13 03:05:50.126989] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.376 [2024-05-13 03:05:50.127004] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.376 [2024-05-13 03:05:50.127016] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.376 [2024-05-13 03:05:50.127028] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.376 [2024-05-13 03:05:50.127041] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.376 [2024-05-13 03:05:50.127053] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.376 [2024-05-13 03:05:50.127077] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.376 [2024-05-13 03:05:50.127089] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.376 [2024-05-13 03:05:50.127102] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the 
state(5) to be set 00:24:59.376 [2024-05-13 03:05:50.127115] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.376 [2024-05-13 03:05:50.127133] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.376 [2024-05-13 03:05:50.127146] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.376 [2024-05-13 03:05:50.127159] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.376 [2024-05-13 03:05:50.127175] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af280 is same with the state(5) to be set 00:24:59.376 [2024-05-13 03:05:50.128929] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:24:59.376 [2024-05-13 03:05:50.128972] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f5e730 (9): Bad file descriptor 00:24:59.376 [2024-05-13 03:05:50.131219] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:24:59.376 [2024-05-13 03:05:50.131294] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd0370 (9): Bad file descriptor 00:24:59.376 [2024-05-13 03:05:50.131346] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1da65c0 (9): Bad file descriptor 00:24:59.376 [2024-05-13 03:05:50.131422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.376 [2024-05-13 03:05:50.131565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.376 [2024-05-13 03:05:50.131585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.376 [2024-05-13 03:05:50.131599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.376 [2024-05-13 03:05:50.131613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.376 [2024-05-13 03:05:50.131627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.376 [2024-05-13 03:05:50.131641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.376 [2024-05-13 03:05:50.131654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.376 [2024-05-13 03:05:50.131668] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de2c20 is same with the state(5) to be set 00:24:59.376 [2024-05-13 03:05:50.131703] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f70cc0 (9): Bad file descriptor 00:24:59.376 [2024-05-13 03:05:50.131769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.376 [2024-05-13 03:05:50.131790] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.376 [2024-05-13 03:05:50.131806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.376 [2024-05-13 03:05:50.131819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.376 [2024-05-13 03:05:50.131833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.376 [2024-05-13 03:05:50.131848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.376 [2024-05-13 03:05:50.131862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.376 [2024-05-13 03:05:50.131876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.376 [2024-05-13 03:05:50.131889] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c640 is same with the state(5) to be set 00:24:59.376 [2024-05-13 03:05:50.133388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.376 [2024-05-13 03:05:50.134667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.376 [2024-05-13 03:05:50.134722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f5e730 with addr=10.0.0.2, port=4420 00:24:59.377 [2024-05-13 03:05:50.134747] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5e730 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.134820] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:59.377 [2024-05-13 03:05:50.134894] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:59.377 [2024-05-13 03:05:50.135375] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.135406] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.135420] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.135433] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.135445] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.135467] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.135482] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.135495] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.135508] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.135520] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.135532] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.135544] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.135574] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.135591] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.135603] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.135616] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.135655] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.135670] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.135724] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.135765] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.135781] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.135794] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.135807] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.135819] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.135845] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.135862] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.135875] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.135888] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.135900] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.135914] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.135926] 
tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.135939] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.135952] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.135964] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.135977] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.135980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.377 [2024-05-13 03:05:50.136001] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.136014] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.136026] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.136038] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.136051] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.136063] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.136075] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.136087] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.136100] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.136112] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.136125] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.136173] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.136187] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.136211] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.136225] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with [2024-05-13 03:05:50.136220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.377 the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.136246] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x14af740 is same with [2024-05-13 03:05:50.136253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dd0370 with addr=10.0.0.2, port=4420 00:24:59.377 [2024-05-13 03:05:50.136271] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd0370 is same with the state(5) to be set 00:24:59.377 the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.136289] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with [2024-05-13 03:05:50.136291] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f5e730 (9): the state(5) to be set 00:24:59.377 Bad file descriptor 00:24:59.377 [2024-05-13 03:05:50.136306] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.136319] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.136336] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.136349] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.136361] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.136373] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.136388] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.136400] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.136406] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:59.377 [2024-05-13 03:05:50.136415] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.136428] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.136440] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af740 is same with the state(5) to be set 00:24:59.377 [2024-05-13 03:05:50.136902] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd0370 (9): Bad file descriptor 00:24:59.377 [2024-05-13 03:05:50.136929] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:24:59.377 [2024-05-13 03:05:50.136944] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:24:59.377 [2024-05-13 03:05:50.136966] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:24:59.377 [2024-05-13 03:05:50.137308] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.377 [2024-05-13 03:05:50.137333] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:24:59.377 [2024-05-13 03:05:50.137348] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:24:59.377 [2024-05-13 03:05:50.137362] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:24:59.377 [2024-05-13 03:05:50.137731] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.377 [2024-05-13 03:05:50.137880] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:59.377 [2024-05-13 03:05:50.138162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.377 [2024-05-13 03:05:50.138188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.377 [2024-05-13 03:05:50.138218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.377 [2024-05-13 03:05:50.138236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.377 [2024-05-13 03:05:50.138253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.377 [2024-05-13 03:05:50.138268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.377 [2024-05-13 03:05:50.138285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.378 [2024-05-13 03:05:50.138299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.378 [2024-05-13 03:05:50.138316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.378 [2024-05-13 03:05:50.138330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.378 [2024-05-13 03:05:50.138346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.378 [2024-05-13 03:05:50.138360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.378 [2024-05-13 03:05:50.138377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.378 [2024-05-13 03:05:50.138391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.378 [2024-05-13 03:05:50.138409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.378 [2024-05-13 03:05:50.138423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.378 [2024-05-13 03:05:50.138439] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.378 [2024-05-13 03:05:50.138454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.378 [2024-05-13 03:05:50.138470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.378 [2024-05-13 03:05:50.138484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.378 [2024-05-13 03:05:50.138500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.378 [2024-05-13 03:05:50.138514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.378 [2024-05-13 03:05:50.138531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.378 [2024-05-13 03:05:50.138545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.378 [2024-05-13 03:05:50.138561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.378 [2024-05-13 03:05:50.138575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.378 [2024-05-13 03:05:50.138591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.378 [2024-05-13 03:05:50.138609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.378 [2024-05-13 03:05:50.138627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.378 [2024-05-13 03:05:50.138641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.378 [2024-05-13 03:05:50.138657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.378 [2024-05-13 03:05:50.138671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.378 [2024-05-13 03:05:50.138688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.378 [2024-05-13 03:05:50.138711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.378 [2024-05-13 03:05:50.138728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.378 [2024-05-13 03:05:50.138745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.378 [2024-05-13 03:05:50.138762] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.378 [2024-05-13 03:05:50.138776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.378 [2024-05-13 03:05:50.138792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.378 [2024-05-13 03:05:50.138806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.378 [2024-05-13 03:05:50.138822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.378 [2024-05-13 03:05:50.138836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.378 [2024-05-13 03:05:50.138852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.378 [2024-05-13 03:05:50.138866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.378 [2024-05-13 03:05:50.138882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.378 [2024-05-13 03:05:50.138896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.378 [2024-05-13 03:05:50.138912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.378 [2024-05-13 03:05:50.138926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.378 [2024-05-13 03:05:50.138942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.378 [2024-05-13 03:05:50.138956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.378 [2024-05-13 03:05:50.138972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.378 [2024-05-13 03:05:50.138986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.378 [2024-05-13 03:05:50.139010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.378 [2024-05-13 03:05:50.139025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.378 [2024-05-13 03:05:50.139042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.378 [2024-05-13 03:05:50.139056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.378 [2024-05-13 03:05:50.139072] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.378 [2024-05-13 03:05:50.139086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.378 [2024-05-13 03:05:50.139103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.378 [2024-05-13 03:05:50.139117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.378 [2024-05-13 03:05:50.139133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.378 [2024-05-13 03:05:50.139147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.378 [2024-05-13 03:05:50.139163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.378 [2024-05-13 03:05:50.139177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.378 [2024-05-13 03:05:50.139194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.378 [2024-05-13 03:05:50.139208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.378 [2024-05-13 03:05:50.139224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.378 [2024-05-13 03:05:50.139238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.378 [2024-05-13 03:05:50.139254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.378 [2024-05-13 03:05:50.139268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.378 [2024-05-13 03:05:50.139284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.378 [2024-05-13 03:05:50.139298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.378 [2024-05-13 03:05:50.139314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.378 [2024-05-13 03:05:50.139337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.378 [2024-05-13 03:05:50.139352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.378 [2024-05-13 03:05:50.139366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.379 [2024-05-13 03:05:50.139382] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.379 [2024-05-13 03:05:50.139400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.379 [2024-05-13 03:05:50.139418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.379 [2024-05-13 03:05:50.139432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.379 [2024-05-13 03:05:50.139449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.379 [2024-05-13 03:05:50.139463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.379 [2024-05-13 03:05:50.139479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.379 [2024-05-13 03:05:50.139493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.379 [2024-05-13 03:05:50.139509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.379 [2024-05-13 03:05:50.139524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.379 [2024-05-13 03:05:50.139540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.379 [2024-05-13 03:05:50.139554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.379 [2024-05-13 03:05:50.139570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.379 [2024-05-13 03:05:50.139584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.379 [2024-05-13 03:05:50.139600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.379 [2024-05-13 03:05:50.139614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.379 [2024-05-13 03:05:50.139630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.379 [2024-05-13 03:05:50.139643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.379 [2024-05-13 03:05:50.139660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.379 [2024-05-13 03:05:50.139674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.379 [2024-05-13 03:05:50.139702] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.379 [2024-05-13 03:05:50.139718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.379 [2024-05-13 03:05:50.139735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.379 [2024-05-13 03:05:50.139749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.379 [2024-05-13 03:05:50.139765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.379 [2024-05-13 03:05:50.139779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.379 [2024-05-13 03:05:50.139799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.379 [2024-05-13 03:05:50.139814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.379 [2024-05-13 03:05:50.139830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.379 [2024-05-13 03:05:50.139844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.379 [2024-05-13 03:05:50.139860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.379 [2024-05-13 03:05:50.139874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.379 [2024-05-13 03:05:50.139890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.379 [2024-05-13 03:05:50.139904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.379 [2024-05-13 03:05:50.139920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.379 [2024-05-13 03:05:50.139934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.379 [2024-05-13 03:05:50.139950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.379 [2024-05-13 03:05:50.139964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.379 [2024-05-13 03:05:50.139980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.379 [2024-05-13 03:05:50.139999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.379 [2024-05-13 03:05:50.140031] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.379 [2024-05-13 03:05:50.140046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.379 [2024-05-13 03:05:50.140062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.379 [2024-05-13 03:05:50.140081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.379 [2024-05-13 03:05:50.140097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.379 [2024-05-13 03:05:50.140110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.379 [2024-05-13 03:05:50.140126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.379 [2024-05-13 03:05:50.140140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.379 [2024-05-13 03:05:50.140155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.379 [2024-05-13 03:05:50.140169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.379 [2024-05-13 03:05:50.140185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.379 [2024-05-13 03:05:50.140203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.379 [2024-05-13 03:05:50.140218] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f26ae0 is same with the state(5) to be set 00:24:59.379 [2024-05-13 03:05:50.140303] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f26ae0 was disconnected and freed. reset controller. 
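The dump above ends with qpair 0x1f26ae0 being disconnected and freed while the controller is reset: every outstanding READ/WRITE on that qpair is printed by nvme_io_qpair_print_command and completed as ABORTED - SQ DELETION (00/08). The same pattern repeats for the remaining qpairs below. As a reading aid only (not part of the test output), a minimal shell sketch along these lines could summarize such a dump, assuming the console log were saved to a file; the path build.log is hypothetical:

  LOG=build.log   # assumed location of this console log
  # tally aborted submissions per opcode (READ/WRITE) as printed by nvme_io_qpair_print_command
  grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]*' "$LOG" \
    | awk '{n[$NF]++} END {for (op in n) printf "%-6s %d\n", op, n[op]}'
  # count occurrences of the ABORTED - SQ DELETION completion status
  grep -o 'ABORTED - SQ DELETION' "$LOG" | wc -l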
00:24:59.379 [2024-05-13 03:05:50.141758] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:59.379 [2024-05-13 03:05:50.141902] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:24:59.379 [2024-05-13 03:05:50.141935] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de2c20 (9): Bad file descriptor 00:24:59.379 [2024-05-13 03:05:50.141997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.379 [2024-05-13 03:05:50.142019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.379 [2024-05-13 03:05:50.142035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.379 [2024-05-13 03:05:50.142059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.379 [2024-05-13 03:05:50.142074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.379 [2024-05-13 03:05:50.142088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.379 [2024-05-13 03:05:50.142103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.379 [2024-05-13 03:05:50.142117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.379 [2024-05-13 03:05:50.142131] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f61cb0 is same with the state(5) to be set 00:24:59.379 [2024-05-13 03:05:50.142188] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f4c640 (9): Bad file descriptor 00:24:59.379 [2024-05-13 03:05:50.142240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.379 [2024-05-13 03:05:50.142261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.379 [2024-05-13 03:05:50.142287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.379 [2024-05-13 03:05:50.142301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.379 [2024-05-13 03:05:50.142315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.379 [2024-05-13 03:05:50.142329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.379 [2024-05-13 03:05:50.142354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.379 [2024-05-13 03:05:50.142368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.379 [2024-05-13 
03:05:50.142381] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f479e0 is same with the state(5) to be set 00:24:59.379 [two console streams were interleaved mid-line between 03:05:50.142512 and 03:05:50.144767; the recoverable content of that span, condensed: nvme_qpair.c: 243:nvme_io_qpair_print_command printed the outstanding I/O on the failing qpair -- WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 followed by READ sqid:1 cid:0 through cid:62 nsid:1, lba:24576 through lba:32512 in steps of 128, all SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 -- and nvme_qpair.c: 474:spdk_nvme_print_completion reported each of them as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0, while tcp.c:1595:nvmf_tcp_qpair_set_recv_state repeatedly logged "The recv state of tqpair=0x169a720 is same with the state(5) to be set" and, from 03:05:50.144190 on, the same message for tqpair=0x169abc0] 00:24:59.382 [2024-05-13 03:05:50.144769] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.382 [2024-05-13 03:05:50.144780] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169abc0 is same with the state(5) to be set 00:24:59.382 [2024-05-13 03:05:50.144785] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f29490 is same with the state(5) to be set 00:24:59.382 [2024-05-13 03:05:50.144793] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169abc0 is same with the state(5) to be set 00:24:59.382 [2024-05-13 03:05:50.144806] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169abc0 is same with the state(5) to be set 00:24:59.382 [2024-05-13 03:05:50.144818] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169abc0 is same with the state(5) to be set 00:24:59.382 [2024-05-13 03:05:50.144830] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169abc0 is same with the state(5) to be set 00:24:59.382 [2024-05-13 03:05:50.144843] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169abc0 is same with the state(5) to be set 00:24:59.382 [2024-05-13 03:05:50.144851] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f29490 was disconnected and freed. reset controller. 00:24:59.382 [2024-05-13 03:05:50.144860] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169abc0 is same with the state(5) to be set 00:24:59.382 [2024-05-13 03:05:50.144874] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169abc0 is same with the state(5) to be set 00:24:59.382 [2024-05-13 03:05:50.144886] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169abc0 is same with the state(5) to be set 00:24:59.382 [2024-05-13 03:05:50.144899] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169abc0 is same with the state(5) to be set 00:24:59.382 [2024-05-13 03:05:50.144911] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169abc0 is same with the state(5) to be set 00:24:59.382 [2024-05-13 03:05:50.144923] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169abc0 is same with the state(5) to be set 00:24:59.382 [2024-05-13 03:05:50.144941] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169abc0 is same with the state(5) to be set 00:24:59.382 [2024-05-13 03:05:50.144954] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169abc0 is same with the state(5) to be set 00:24:59.382 [2024-05-13 03:05:50.144967] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169abc0 is same with the state(5) to be set 00:24:59.382 [2024-05-13 03:05:50.144979] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169abc0 is same with the state(5) to be set 00:24:59.382 [2024-05-13 03:05:50.145001] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169abc0 is same with the state(5) to be set 00:24:59.382 [2024-05-13 03:05:50.145013] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169abc0 is same with the state(5) to be set 00:24:59.382 [2024-05-13 03:05:50.145025] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169abc0 is same with 
the state(5) to be set 00:24:59.382 [2024-05-13 03:05:50.145037] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169abc0 is same with the state(5) to be set 00:24:59.382 [2024-05-13 03:05:50.145049] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169abc0 is same with the state(5) to be set 00:24:59.383 [2024-05-13 03:05:50.145068] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169abc0 is same with the state(5) to be set 00:24:59.383 [2024-05-13 03:05:50.145080] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169abc0 is same with the state(5) to be set 00:24:59.383 [2024-05-13 03:05:50.145087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.383 [2024-05-13 03:05:50.145093] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169abc0 is same with the state(5) to be set 00:24:59.383 [2024-05-13 03:05:50.145110] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169abc0 is same with the state(5) to be set 00:24:59.383 [2024-05-13 03:05:50.145112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.383 [2024-05-13 03:05:50.145122] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169abc0 is same with the state(5) to be set 00:24:59.383 [2024-05-13 03:05:50.145133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.383 [2024-05-13 03:05:50.145150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.383 [2024-05-13 03:05:50.145167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.383 [2024-05-13 03:05:50.145181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.383 [2024-05-13 03:05:50.145202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.383 [2024-05-13 03:05:50.145218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.383 [2024-05-13 03:05:50.145234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.383 [2024-05-13 03:05:50.145248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.383 [2024-05-13 03:05:50.145265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.383 [2024-05-13 03:05:50.145279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.383 [2024-05-13 03:05:50.145295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.383 [2024-05-13 03:05:50.145309] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.383 [2024-05-13 03:05:50.145326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.383 [2024-05-13 03:05:50.145340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.383 [2024-05-13 03:05:50.145357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.383 [2024-05-13 03:05:50.145372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.383 [2024-05-13 03:05:50.145388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.383 [2024-05-13 03:05:50.145402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.383 [2024-05-13 03:05:50.145419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.383 [2024-05-13 03:05:50.145434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.383 [2024-05-13 03:05:50.145450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.383 [2024-05-13 03:05:50.145464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.383 [2024-05-13 03:05:50.145481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.383 [2024-05-13 03:05:50.145495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.383 [2024-05-13 03:05:50.145511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.383 [2024-05-13 03:05:50.145525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.383 [2024-05-13 03:05:50.145541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.383 [2024-05-13 03:05:50.145555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.383 [2024-05-13 03:05:50.145573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.383 [2024-05-13 03:05:50.145591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.383 [2024-05-13 03:05:50.145607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.383 [2024-05-13 03:05:50.145622] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.383 [2024-05-13 03:05:50.145646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.383 [2024-05-13 03:05:50.145660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.383 [2024-05-13 03:05:50.145676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.383 [2024-05-13 03:05:50.145690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.383 [2024-05-13 03:05:50.145716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.383 [2024-05-13 03:05:50.145732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.383 [2024-05-13 03:05:50.145748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.383 [2024-05-13 03:05:50.145762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.383 [2024-05-13 03:05:50.145778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.383 [2024-05-13 03:05:50.145792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.383 [2024-05-13 03:05:50.145808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.383 [2024-05-13 03:05:50.145828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.383 [2024-05-13 03:05:50.145845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.383 [2024-05-13 03:05:50.145859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.383 [2024-05-13 03:05:50.145876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.383 [2024-05-13 03:05:50.145890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.383 [2024-05-13 03:05:50.145906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.383 [2024-05-13 03:05:50.145920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.383 [2024-05-13 03:05:50.145936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.383 [2024-05-13 03:05:50.145950] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.383 [2024-05-13 03:05:50.145966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.383 [2024-05-13 03:05:50.145991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.383 [2024-05-13 03:05:50.146011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.383 [2024-05-13 03:05:50.146026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.383 [2024-05-13 03:05:50.146053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.383 [2024-05-13 03:05:50.146067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.383 [2024-05-13 03:05:50.146084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.383 [2024-05-13 03:05:50.146098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.383 [2024-05-13 03:05:50.146114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.383 [2024-05-13 03:05:50.146127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.383 [2024-05-13 03:05:50.146144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.383 [2024-05-13 03:05:50.146158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.383 [2024-05-13 03:05:50.146174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.383 [2024-05-13 03:05:50.146188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.383 [2024-05-13 03:05:50.146204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.383 [2024-05-13 03:05:50.146217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.383 [2024-05-13 03:05:50.146234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.383 [2024-05-13 03:05:50.146248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.384 [2024-05-13 03:05:50.146263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.384 [2024-05-13 03:05:50.146277] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.384 [2024-05-13 03:05:50.146293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.384 [2024-05-13 03:05:50.146307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.384 [2024-05-13 03:05:50.146345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.384 [2024-05-13 03:05:50.146361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.384 [2024-05-13 03:05:50.146377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.384 [2024-05-13 03:05:50.146406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.384 [2024-05-13 03:05:50.146422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.384 [2024-05-13 03:05:50.146440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.384 [2024-05-13 03:05:50.146457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.384 [2024-05-13 03:05:50.146472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.384 [2024-05-13 03:05:50.146502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.384 [2024-05-13 03:05:50.146517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.384 [2024-05-13 03:05:50.146534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.384 [2024-05-13 03:05:50.146548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.384 [2024-05-13 03:05:50.146564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.384 [2024-05-13 03:05:50.146584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.384 [2024-05-13 03:05:50.146605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.384 [2024-05-13 03:05:50.146619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.384 [2024-05-13 03:05:50.146636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.384 [2024-05-13 03:05:50.146650] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.384 [2024-05-13 03:05:50.146672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.384 [2024-05-13 03:05:50.146687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.384 [2024-05-13 03:05:50.146712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.384 [2024-05-13 03:05:50.146727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.384 [2024-05-13 03:05:50.146754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.384 [2024-05-13 03:05:50.146768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.384 [2024-05-13 03:05:50.146785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.384 [2024-05-13 03:05:50.146799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.384 [2024-05-13 03:05:50.146815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.384 [2024-05-13 03:05:50.146829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.384 [2024-05-13 03:05:50.146845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.384 [2024-05-13 03:05:50.146859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.384 [2024-05-13 03:05:50.146879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.384 [2024-05-13 03:05:50.146894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.384 [2024-05-13 03:05:50.146910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.384 [2024-05-13 03:05:50.146925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.384 [2024-05-13 03:05:50.146941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.384 [2024-05-13 03:05:50.146955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.384 [2024-05-13 03:05:50.146971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.384 [2024-05-13 03:05:50.146996] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.384 [2024-05-13 03:05:50.147028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.384 [2024-05-13 03:05:50.147042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.384 [2024-05-13 03:05:50.147057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.384 [2024-05-13 03:05:50.147071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.384 [2024-05-13 03:05:50.147086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.384 [2024-05-13 03:05:50.147100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.384 [2024-05-13 03:05:50.147115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.384 [2024-05-13 03:05:50.147134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.384 [2024-05-13 03:05:50.147151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.384 [2024-05-13 03:05:50.147164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.384 [2024-05-13 03:05:50.147180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.384 [2024-05-13 03:05:50.147193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.384 [2024-05-13 03:05:50.147209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.384 [2024-05-13 03:05:50.147222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.384 [2024-05-13 03:05:50.147237] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3b730 is same with the state(5) to be set 00:24:59.384 [2024-05-13 03:05:50.148468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.384 [2024-05-13 03:05:50.148492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.384 [2024-05-13 03:05:50.148517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.384 [2024-05-13 03:05:50.148534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.384 [2024-05-13 03:05:50.148550] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.384 [2024-05-13 03:05:50.148564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.384 [2024-05-13 03:05:50.148580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.384 [2024-05-13 03:05:50.148594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.384 [2024-05-13 03:05:50.148610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.384 [2024-05-13 03:05:50.148624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.384 [2024-05-13 03:05:50.148640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.384 [2024-05-13 03:05:50.148653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.384 [2024-05-13 03:05:50.148669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.384 [2024-05-13 03:05:50.148683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.384 [2024-05-13 03:05:50.148713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.384 [2024-05-13 03:05:50.148729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.384 [2024-05-13 03:05:50.148752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.384 [2024-05-13 03:05:50.148766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.384 [2024-05-13 03:05:50.148783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.384 [2024-05-13 03:05:50.148796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.384 [2024-05-13 03:05:50.148813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.384 [2024-05-13 03:05:50.148826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.384 [2024-05-13 03:05:50.148843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.384 [2024-05-13 03:05:50.148858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.384 [2024-05-13 03:05:50.148875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.385 [2024-05-13 03:05:50.148889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.385 [2024-05-13 03:05:50.148906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.385 [2024-05-13 03:05:50.148923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.385 [2024-05-13 03:05:50.148941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.385 [2024-05-13 03:05:50.148956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.385 [2024-05-13 03:05:50.148972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.385 [2024-05-13 03:05:50.148986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.385 [2024-05-13 03:05:50.149011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.385 [2024-05-13 03:05:50.149025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.385 [2024-05-13 03:05:50.149041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.385 [2024-05-13 03:05:50.149054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.385 [2024-05-13 03:05:50.149070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.385 [2024-05-13 03:05:50.149083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.385 [2024-05-13 03:05:50.149100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.385 [2024-05-13 03:05:50.149113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.385 [2024-05-13 03:05:50.149129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.385 [2024-05-13 03:05:50.149143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.385 [2024-05-13 03:05:50.149158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.385 [2024-05-13 03:05:50.149172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.385 [2024-05-13 03:05:50.149188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.385 [2024-05-13 03:05:50.149202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.385 [2024-05-13 03:05:50.149217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.385 [2024-05-13 03:05:50.149231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.385 [2024-05-13 03:05:50.149247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.385 [2024-05-13 03:05:50.149261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.385 [2024-05-13 03:05:50.149277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.385 [2024-05-13 03:05:50.149290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.385 [2024-05-13 03:05:50.149317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.385 [2024-05-13 03:05:50.149331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.385 [2024-05-13 03:05:50.149347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.385 [2024-05-13 03:05:50.149361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.385 [2024-05-13 03:05:50.149377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.385 [2024-05-13 03:05:50.149391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.385 [2024-05-13 03:05:50.149407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.385 [2024-05-13 03:05:50.149421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.385 [2024-05-13 03:05:50.149436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.385 [2024-05-13 03:05:50.149450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.385 [2024-05-13 03:05:50.149465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.385 [2024-05-13 03:05:50.149479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.385 [2024-05-13 03:05:50.149495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:59.385 [2024-05-13 03:05:50.149508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.385 [2024-05-13 03:05:50.149524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.385 [2024-05-13 03:05:50.149538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.385 [2024-05-13 03:05:50.149553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.385 [2024-05-13 03:05:50.149567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.385 [2024-05-13 03:05:50.149583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.385 [2024-05-13 03:05:50.149596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.385 [2024-05-13 03:05:50.149612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.385 [2024-05-13 03:05:50.149626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.385 [2024-05-13 03:05:50.149641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.385 [2024-05-13 03:05:50.149655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.385 [2024-05-13 03:05:50.149671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.385 [2024-05-13 03:05:50.149706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.385 [2024-05-13 03:05:50.149730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.385 [2024-05-13 03:05:50.149746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.385 [2024-05-13 03:05:50.149764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.385 [2024-05-13 03:05:50.149777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.385 [2024-05-13 03:05:50.149793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.385 [2024-05-13 03:05:50.149807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.385 [2024-05-13 03:05:50.149822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:59.385 [2024-05-13 03:05:50.149836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.385 [2024-05-13 03:05:50.149852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.385 [2024-05-13 03:05:50.149866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.385 [2024-05-13 03:05:50.149882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.385 [2024-05-13 03:05:50.149896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.385 [2024-05-13 03:05:50.149912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.385 [2024-05-13 03:05:50.149926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.385 [2024-05-13 03:05:50.149942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.385 [2024-05-13 03:05:50.160420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.651 [2024-05-13 03:05:50.160497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.651 [2024-05-13 03:05:50.160514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.651 [2024-05-13 03:05:50.160530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.651 [2024-05-13 03:05:50.160545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.651 [2024-05-13 03:05:50.160562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.651 [2024-05-13 03:05:50.160576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.651 [2024-05-13 03:05:50.160592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.651 [2024-05-13 03:05:50.160606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.651 [2024-05-13 03:05:50.160634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.651 [2024-05-13 03:05:50.160649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.651 [2024-05-13 03:05:50.160666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.651 [2024-05-13 
03:05:50.160680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.651 [2024-05-13 03:05:50.160704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.651 [2024-05-13 03:05:50.160720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.651 [2024-05-13 03:05:50.160744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.651 [2024-05-13 03:05:50.160759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.651 [2024-05-13 03:05:50.160777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.651 [2024-05-13 03:05:50.160791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.651 [2024-05-13 03:05:50.160807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.651 [2024-05-13 03:05:50.160821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.651 [2024-05-13 03:05:50.160837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.651 [2024-05-13 03:05:50.160851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.651 [2024-05-13 03:05:50.160868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.651 [2024-05-13 03:05:50.160882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.651 [2024-05-13 03:05:50.160898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.651 [2024-05-13 03:05:50.160913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.651 [2024-05-13 03:05:50.160929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.651 [2024-05-13 03:05:50.160944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.651 [2024-05-13 03:05:50.160961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.651 [2024-05-13 03:05:50.160974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.651 [2024-05-13 03:05:50.160991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.651 [2024-05-13 03:05:50.161005] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.651 [2024-05-13 03:05:50.161021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.651 [2024-05-13 03:05:50.161052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.651 [2024-05-13 03:05:50.161068] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f0f910 is same with the state(5) to be set 00:24:59.651 [2024-05-13 03:05:50.164008] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:24:59.651 [2024-05-13 03:05:50.164064] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.651 [2024-05-13 03:05:50.164086] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:24:59.651 [2024-05-13 03:05:50.164103] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:24:59.651 [2024-05-13 03:05:50.164133] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f479e0 (9): Bad file descriptor 00:24:59.651 [2024-05-13 03:05:50.164436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.651 [2024-05-13 03:05:50.164668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.651 [2024-05-13 03:05:50.164703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1de2c20 with addr=10.0.0.2, port=4420 00:24:59.651 [2024-05-13 03:05:50.164724] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de2c20 is same with the state(5) to be set 00:24:59.651 [2024-05-13 03:05:50.164791] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f61cb0 (9): Bad file descriptor 00:24:59.651 [2024-05-13 03:05:50.164855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.651 [2024-05-13 03:05:50.164879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.651 [2024-05-13 03:05:50.164897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.651 [2024-05-13 03:05:50.164911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.651 [2024-05-13 03:05:50.164926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.651 [2024-05-13 03:05:50.164940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.651 [2024-05-13 03:05:50.164955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.651 [2024-05-13 03:05:50.164969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.651 [2024-05-13 03:05:50.164988] nvme_tcp.c: 
322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dee370 is same with the state(5) to be set 00:24:59.651 [2024-05-13 03:05:50.165061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.651 [2024-05-13 03:05:50.165083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.651 [2024-05-13 03:05:50.165099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.651 [2024-05-13 03:05:50.165113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.651 [2024-05-13 03:05:50.165128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.651 [2024-05-13 03:05:50.165141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.651 [2024-05-13 03:05:50.165162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.651 [2024-05-13 03:05:50.165176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.651 [2024-05-13 03:05:50.165189] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dee8d0 is same with the state(5) to be set 00:24:59.651 [2024-05-13 03:05:50.165218] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:59.651 [2024-05-13 03:05:50.165242] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de2c20 (9): Bad file descriptor 00:24:59.651 [2024-05-13 03:05:50.165458] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:24:59.651 [2024-05-13 03:05:50.165702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.651 [2024-05-13 03:05:50.165913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.651 [2024-05-13 03:05:50.165941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f5e730 with addr=10.0.0.2, port=4420 00:24:59.652 [2024-05-13 03:05:50.165958] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5e730 is same with the state(5) to be set 00:24:59.652 [2024-05-13 03:05:50.166145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.652 [2024-05-13 03:05:50.166343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.652 [2024-05-13 03:05:50.166370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da65c0 with addr=10.0.0.2, port=4420 00:24:59.652 [2024-05-13 03:05:50.166386] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da65c0 is same with the state(5) to be set 00:24:59.652 [2024-05-13 03:05:50.166570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.652 [2024-05-13 03:05:50.166755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.652 [2024-05-13 03:05:50.166781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f70cc0 with addr=10.0.0.2, port=4420 00:24:59.652 [2024-05-13 03:05:50.166797] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f70cc0 is same with the state(5) to be set 00:24:59.652 [2024-05-13 03:05:50.167386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.652 [2024-05-13 03:05:50.167412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.652 [2024-05-13 03:05:50.167435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.652 [2024-05-13 03:05:50.167451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.652 [2024-05-13 03:05:50.167469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.652 [2024-05-13 03:05:50.167483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.652 [2024-05-13 03:05:50.167500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.652 [2024-05-13 03:05:50.167514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.652 [2024-05-13 03:05:50.167530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:59.652 [2024-05-13 03:05:50.167545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.652 [2024-05-13 03:05:50.167561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.652 [2024-05-13 03:05:50.167581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.652 [2024-05-13 03:05:50.167598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.652 [2024-05-13 03:05:50.167612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.652 [2024-05-13 03:05:50.167628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.652 [2024-05-13 03:05:50.167643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.652 [2024-05-13 03:05:50.167659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.652 [2024-05-13 03:05:50.167673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.652 [2024-05-13 03:05:50.167689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.652 [2024-05-13 03:05:50.167714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.652 [2024-05-13 03:05:50.167732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.652 [2024-05-13 03:05:50.167746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.652 [2024-05-13 03:05:50.167763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.652 [2024-05-13 03:05:50.167777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.652 [2024-05-13 03:05:50.167793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.652 [2024-05-13 03:05:50.167808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.652 [2024-05-13 03:05:50.167824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.652 [2024-05-13 03:05:50.167838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.652 [2024-05-13 03:05:50.167855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.652 
[2024-05-13 03:05:50.167869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.652 [2024-05-13 03:05:50.167886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.652 [2024-05-13 03:05:50.167900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.652 [2024-05-13 03:05:50.167917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.652 [2024-05-13 03:05:50.167932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.652 [2024-05-13 03:05:50.167948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.652 [2024-05-13 03:05:50.167962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.652 [2024-05-13 03:05:50.167983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.652 [2024-05-13 03:05:50.168006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.652 [2024-05-13 03:05:50.168023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.652 [2024-05-13 03:05:50.168037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.652 [2024-05-13 03:05:50.168064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.652 [2024-05-13 03:05:50.168078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.652 [2024-05-13 03:05:50.168095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.652 [2024-05-13 03:05:50.168109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.652 [2024-05-13 03:05:50.168125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.652 [2024-05-13 03:05:50.168140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.652 [2024-05-13 03:05:50.168156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.652 [2024-05-13 03:05:50.168170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.652 [2024-05-13 03:05:50.168187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.652 [2024-05-13 
03:05:50.168201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.652 [2024-05-13 03:05:50.168217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.652 [2024-05-13 03:05:50.168232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.652 [2024-05-13 03:05:50.168248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.652 [2024-05-13 03:05:50.168263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.652 [2024-05-13 03:05:50.168281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.652 [2024-05-13 03:05:50.168296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.652 [2024-05-13 03:05:50.168312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.652 [2024-05-13 03:05:50.168326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.652 [2024-05-13 03:05:50.168342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.652 [2024-05-13 03:05:50.168356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.652 [2024-05-13 03:05:50.168373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.652 [2024-05-13 03:05:50.168390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.652 [2024-05-13 03:05:50.168407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.652 [2024-05-13 03:05:50.168421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.652 [2024-05-13 03:05:50.168438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.652 [2024-05-13 03:05:50.168451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.652 [2024-05-13 03:05:50.168468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.652 [2024-05-13 03:05:50.168482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.652 [2024-05-13 03:05:50.168498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.652 [2024-05-13 03:05:50.168512] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.652 [2024-05-13 03:05:50.168528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.652 [2024-05-13 03:05:50.168542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.652 [2024-05-13 03:05:50.168559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.652 [2024-05-13 03:05:50.168573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.653 [2024-05-13 03:05:50.168589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.653 [2024-05-13 03:05:50.168604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.653 [2024-05-13 03:05:50.168620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.653 [2024-05-13 03:05:50.168634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.653 [2024-05-13 03:05:50.168650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.653 [2024-05-13 03:05:50.168664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.653 [2024-05-13 03:05:50.168680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.653 [2024-05-13 03:05:50.168701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.653 [2024-05-13 03:05:50.168719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.653 [2024-05-13 03:05:50.168746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.653 [2024-05-13 03:05:50.168762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.653 [2024-05-13 03:05:50.168776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.653 [2024-05-13 03:05:50.168796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.653 [2024-05-13 03:05:50.168811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.653 [2024-05-13 03:05:50.168827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.653 [2024-05-13 03:05:50.168841] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.653 [2024-05-13 03:05:50.168858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.653 [2024-05-13 03:05:50.168872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.653 [2024-05-13 03:05:50.168889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.653 [2024-05-13 03:05:50.168902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.653 [2024-05-13 03:05:50.168919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.653 [2024-05-13 03:05:50.168933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.653 [2024-05-13 03:05:50.168949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.653 [2024-05-13 03:05:50.168963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.653 [2024-05-13 03:05:50.168990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.653 [2024-05-13 03:05:50.169005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.653 [2024-05-13 03:05:50.169021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.653 [2024-05-13 03:05:50.169035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.653 [2024-05-13 03:05:50.169052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.653 [2024-05-13 03:05:50.169066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.653 [2024-05-13 03:05:50.169082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.653 [2024-05-13 03:05:50.169096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.653 [2024-05-13 03:05:50.169112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.653 [2024-05-13 03:05:50.169126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.653 [2024-05-13 03:05:50.169142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.653 [2024-05-13 03:05:50.169156] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.653 [2024-05-13 03:05:50.169172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.653 [2024-05-13 03:05:50.169190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.653 [2024-05-13 03:05:50.169211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.653 [2024-05-13 03:05:50.169225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.653 [2024-05-13 03:05:50.169242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.653 [2024-05-13 03:05:50.169256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.653 [2024-05-13 03:05:50.169272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.653 [2024-05-13 03:05:50.169286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.653 [2024-05-13 03:05:50.169303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.653 [2024-05-13 03:05:50.169318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.653 [2024-05-13 03:05:50.169334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.653 [2024-05-13 03:05:50.169348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.653 [2024-05-13 03:05:50.169364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.653 [2024-05-13 03:05:50.169378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.653 [2024-05-13 03:05:50.169396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.653 [2024-05-13 03:05:50.169410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.653 [2024-05-13 03:05:50.169426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.653 [2024-05-13 03:05:50.169440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.653 [2024-05-13 03:05:50.169455] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da23b0 is same with the state(5) to be set 00:24:59.653 [2024-05-13 03:05:50.171182] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode4] resetting controller 00:24:59.653 [2024-05-13 03:05:50.171450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.653 [2024-05-13 03:05:50.171671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.653 [2024-05-13 03:05:50.171704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f479e0 with addr=10.0.0.2, port=4420 00:24:59.653 [2024-05-13 03:05:50.171723] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f479e0 is same with the state(5) to be set 00:24:59.653 [2024-05-13 03:05:50.171928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.653 [2024-05-13 03:05:50.172159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.653 [2024-05-13 03:05:50.172184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dd0370 with addr=10.0.0.2, port=4420 00:24:59.653 [2024-05-13 03:05:50.172200] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd0370 is same with the state(5) to be set 00:24:59.653 [2024-05-13 03:05:50.172229] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f5e730 (9): Bad file descriptor 00:24:59.653 [2024-05-13 03:05:50.172250] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1da65c0 (9): Bad file descriptor 00:24:59.653 [2024-05-13 03:05:50.172269] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f70cc0 (9): Bad file descriptor 00:24:59.653 [2024-05-13 03:05:50.172285] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:24:59.653 [2024-05-13 03:05:50.172299] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:24:59.653 [2024-05-13 03:05:50.172316] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:24:59.653 [2024-05-13 03:05:50.172473] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:59.653 [2024-05-13 03:05:50.172573] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.653 [2024-05-13 03:05:50.172770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.653 [2024-05-13 03:05:50.172988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.653 [2024-05-13 03:05:50.173015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f4c640 with addr=10.0.0.2, port=4420 00:24:59.653 [2024-05-13 03:05:50.173031] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c640 is same with the state(5) to be set 00:24:59.653 [2024-05-13 03:05:50.173062] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f479e0 (9): Bad file descriptor 00:24:59.653 [2024-05-13 03:05:50.173081] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd0370 (9): Bad file descriptor 00:24:59.653 [2024-05-13 03:05:50.173097] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:24:59.653 [2024-05-13 03:05:50.173111] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:24:59.653 [2024-05-13 03:05:50.173124] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:24:59.653 [2024-05-13 03:05:50.173144] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.653 [2024-05-13 03:05:50.173158] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.653 [2024-05-13 03:05:50.173171] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.653 [2024-05-13 03:05:50.173188] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:59.653 [2024-05-13 03:05:50.173202] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:24:59.654 [2024-05-13 03:05:50.173215] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:24:59.654 [2024-05-13 03:05:50.173585] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:59.654 [2024-05-13 03:05:50.173617] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.654 [2024-05-13 03:05:50.173635] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.654 [2024-05-13 03:05:50.173647] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.654 [2024-05-13 03:05:50.173662] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f4c640 (9): Bad file descriptor 00:24:59.654 [2024-05-13 03:05:50.173680] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:24:59.654 [2024-05-13 03:05:50.173693] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:24:59.654 [2024-05-13 03:05:50.173716] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 
00:24:59.654 [2024-05-13 03:05:50.173750] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:24:59.654 [2024-05-13 03:05:50.173765] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:24:59.654 [2024-05-13 03:05:50.173779] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:24:59.654 [2024-05-13 03:05:50.173840] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.654 [2024-05-13 03:05:50.173861] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.654 [2024-05-13 03:05:50.173874] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:24:59.654 [2024-05-13 03:05:50.173887] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:24:59.654 [2024-05-13 03:05:50.173901] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:24:59.654 [2024-05-13 03:05:50.173950] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.654 [2024-05-13 03:05:50.174068] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dee370 (9): Bad file descriptor 00:24:59.654 [2024-05-13 03:05:50.174103] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dee8d0 (9): Bad file descriptor 00:24:59.654 [2024-05-13 03:05:50.174221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.654 [2024-05-13 03:05:50.174246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.654 [2024-05-13 03:05:50.174271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.654 [2024-05-13 03:05:50.174287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.654 [2024-05-13 03:05:50.174305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.654 [2024-05-13 03:05:50.174319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.654 [2024-05-13 03:05:50.174336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.654 [2024-05-13 03:05:50.174350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.654 [2024-05-13 03:05:50.174366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.654 [2024-05-13 03:05:50.174380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.654 [2024-05-13 03:05:50.174397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:59.654 [2024-05-13 03:05:50.174411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.654 [2024-05-13 03:05:50.174428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.654 [2024-05-13 03:05:50.174443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.654 [2024-05-13 03:05:50.174459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.654 [2024-05-13 03:05:50.174474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.654 [2024-05-13 03:05:50.174496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.654 [2024-05-13 03:05:50.174511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.654 [2024-05-13 03:05:50.174528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.654 [2024-05-13 03:05:50.174542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.654 [2024-05-13 03:05:50.174558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.654 [2024-05-13 03:05:50.174573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.654 [2024-05-13 03:05:50.174589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.654 [2024-05-13 03:05:50.174603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.654 [2024-05-13 03:05:50.174619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.654 [2024-05-13 03:05:50.174633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.654 [2024-05-13 03:05:50.174649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.654 [2024-05-13 03:05:50.174663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.654 [2024-05-13 03:05:50.174680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.654 [2024-05-13 03:05:50.174694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.654 [2024-05-13 03:05:50.174720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.654 [2024-05-13 
03:05:50.174740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.654 [2024-05-13 03:05:50.174757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.654 [2024-05-13 03:05:50.174771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.654 [2024-05-13 03:05:50.174789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.654 [2024-05-13 03:05:50.174803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.654 [2024-05-13 03:05:50.174820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.654 [2024-05-13 03:05:50.174835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.654 [2024-05-13 03:05:50.174851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.654 [2024-05-13 03:05:50.174865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.654 [2024-05-13 03:05:50.174882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.654 [2024-05-13 03:05:50.174901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.654 [2024-05-13 03:05:50.174918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.654 [2024-05-13 03:05:50.174932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.654 [2024-05-13 03:05:50.174948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.654 [2024-05-13 03:05:50.174963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.654 [2024-05-13 03:05:50.174989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.654 [2024-05-13 03:05:50.175003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.654 [2024-05-13 03:05:50.175020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.654 [2024-05-13 03:05:50.175033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.654 [2024-05-13 03:05:50.175058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.654 [2024-05-13 03:05:50.175072] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.654 [2024-05-13 03:05:50.175089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.654 [2024-05-13 03:05:50.175103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.654 [2024-05-13 03:05:50.175120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.654 [2024-05-13 03:05:50.175134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.654 [2024-05-13 03:05:50.175151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.654 [2024-05-13 03:05:50.175165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.654 [2024-05-13 03:05:50.175181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.654 [2024-05-13 03:05:50.175195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.654 [2024-05-13 03:05:50.175211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.654 [2024-05-13 03:05:50.175225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.654 [2024-05-13 03:05:50.175242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.654 [2024-05-13 03:05:50.175255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.654 [2024-05-13 03:05:50.175271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.655 [2024-05-13 03:05:50.175285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.655 [2024-05-13 03:05:50.175305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.655 [2024-05-13 03:05:50.175320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.655 [2024-05-13 03:05:50.175336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.655 [2024-05-13 03:05:50.175350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.655 [2024-05-13 03:05:50.175367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.655 [2024-05-13 03:05:50.175381] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.655 [2024-05-13 03:05:50.175397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.655 [2024-05-13 03:05:50.175411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.655 [2024-05-13 03:05:50.175434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.655 [2024-05-13 03:05:50.175449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.655 [2024-05-13 03:05:50.175465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.655 [2024-05-13 03:05:50.175479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.655 [2024-05-13 03:05:50.175495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.655 [2024-05-13 03:05:50.175509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.655 [2024-05-13 03:05:50.175525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.655 [2024-05-13 03:05:50.175540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.655 [2024-05-13 03:05:50.175556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.655 [2024-05-13 03:05:50.175570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.655 [2024-05-13 03:05:50.175586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.655 [2024-05-13 03:05:50.175600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.655 [2024-05-13 03:05:50.175616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.655 [2024-05-13 03:05:50.175630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.655 [2024-05-13 03:05:50.175648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.655 [2024-05-13 03:05:50.175662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.655 [2024-05-13 03:05:50.175678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.655 [2024-05-13 03:05:50.175702] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.655 [2024-05-13 03:05:50.175732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.655 [2024-05-13 03:05:50.175747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.655 [2024-05-13 03:05:50.175763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.655 [2024-05-13 03:05:50.175777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.655 [2024-05-13 03:05:50.175793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.655 [2024-05-13 03:05:50.175807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.655 [2024-05-13 03:05:50.175823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.655 [2024-05-13 03:05:50.175838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.655 [2024-05-13 03:05:50.175854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.655 [2024-05-13 03:05:50.175868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.655 [2024-05-13 03:05:50.175885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.655 [2024-05-13 03:05:50.175898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.655 [2024-05-13 03:05:50.175915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.655 [2024-05-13 03:05:50.175928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.655 [2024-05-13 03:05:50.175944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.655 [2024-05-13 03:05:50.175958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.655 [2024-05-13 03:05:50.175975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.655 [2024-05-13 03:05:50.175997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.655 [2024-05-13 03:05:50.176013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.655 [2024-05-13 03:05:50.176027] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.655 [2024-05-13 03:05:50.176043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.655 [2024-05-13 03:05:50.176057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.655 [2024-05-13 03:05:50.176073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.655 [2024-05-13 03:05:50.176087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.655 [2024-05-13 03:05:50.176107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.655 [2024-05-13 03:05:50.176122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.655 [2024-05-13 03:05:50.176139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.655 [2024-05-13 03:05:50.176153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.655 [2024-05-13 03:05:50.176171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.655 [2024-05-13 03:05:50.176185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.655 [2024-05-13 03:05:50.176201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.655 [2024-05-13 03:05:50.176215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.655 [2024-05-13 03:05:50.176231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.655 [2024-05-13 03:05:50.176245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.655 [2024-05-13 03:05:50.176262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.655 [2024-05-13 03:05:50.176275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.655 [2024-05-13 03:05:50.176291] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f27f90 is same with the state(5) to be set 00:24:59.656 [2024-05-13 03:05:50.177551] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:24:59.656 [2024-05-13 03:05:50.177580] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:24:59.656 [2024-05-13 03:05:50.177950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.656 [2024-05-13 03:05:50.178159] 
posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.656 [2024-05-13 03:05:50.178185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1de2c20 with addr=10.0.0.2, port=4420 00:24:59.656 [2024-05-13 03:05:50.178202] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de2c20 is same with the state(5) to be set 00:24:59.656 [2024-05-13 03:05:50.178376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.656 [2024-05-13 03:05:50.178564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.656 [2024-05-13 03:05:50.178589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f61cb0 with addr=10.0.0.2, port=4420 00:24:59.656 [2024-05-13 03:05:50.178604] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f61cb0 is same with the state(5) to be set 00:24:59.656 [2024-05-13 03:05:50.178912] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:24:59.656 [2024-05-13 03:05:50.178938] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.656 [2024-05-13 03:05:50.178955] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:24:59.656 [2024-05-13 03:05:50.178971] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:24:59.656 [2024-05-13 03:05:50.178988] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:24:59.656 [2024-05-13 03:05:50.179050] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de2c20 (9): Bad file descriptor 00:24:59.656 [2024-05-13 03:05:50.179073] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f61cb0 (9): Bad file descriptor 00:24:59.656 [2024-05-13 03:05:50.179306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.656 [2024-05-13 03:05:50.179484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.656 [2024-05-13 03:05:50.179511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f70cc0 with addr=10.0.0.2, port=4420 00:24:59.656 [2024-05-13 03:05:50.179528] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f70cc0 is same with the state(5) to be set 00:24:59.656 [2024-05-13 03:05:50.179712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.656 [2024-05-13 03:05:50.179913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.656 [2024-05-13 03:05:50.179940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da65c0 with addr=10.0.0.2, port=4420 00:24:59.656 [2024-05-13 03:05:50.179956] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da65c0 is same with the state(5) to be set 00:24:59.656 [2024-05-13 03:05:50.180147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.656 [2024-05-13 03:05:50.180327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.656 [2024-05-13 03:05:50.180353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f5e730 with addr=10.0.0.2, port=4420 00:24:59.656 [2024-05-13 
03:05:50.180369] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5e730 is same with the state(5) to be set 00:24:59.656 [2024-05-13 03:05:50.180559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.656 [2024-05-13 03:05:50.180757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.656 [2024-05-13 03:05:50.180783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dd0370 with addr=10.0.0.2, port=4420 00:24:59.656 [2024-05-13 03:05:50.180799] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd0370 is same with the state(5) to be set 00:24:59.656 [2024-05-13 03:05:50.180976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.656 [2024-05-13 03:05:50.181171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.656 [2024-05-13 03:05:50.181197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f479e0 with addr=10.0.0.2, port=4420 00:24:59.656 [2024-05-13 03:05:50.181213] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f479e0 is same with the state(5) to be set 00:24:59.656 [2024-05-13 03:05:50.181229] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:24:59.656 [2024-05-13 03:05:50.181242] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:24:59.656 [2024-05-13 03:05:50.181258] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:24:59.656 [2024-05-13 03:05:50.181278] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:24:59.656 [2024-05-13 03:05:50.181292] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:24:59.656 [2024-05-13 03:05:50.181305] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:24:59.656 [2024-05-13 03:05:50.181356] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.656 [2024-05-13 03:05:50.181376] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.656 [2024-05-13 03:05:50.181392] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f70cc0 (9): Bad file descriptor 00:24:59.656 [2024-05-13 03:05:50.181411] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1da65c0 (9): Bad file descriptor 00:24:59.656 [2024-05-13 03:05:50.181438] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f5e730 (9): Bad file descriptor 00:24:59.656 [2024-05-13 03:05:50.181457] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd0370 (9): Bad file descriptor 00:24:59.656 [2024-05-13 03:05:50.181475] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f479e0 (9): Bad file descriptor 00:24:59.656 [2024-05-13 03:05:50.181524] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:59.656 [2024-05-13 03:05:50.181543] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:24:59.656 [2024-05-13 03:05:50.181557] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:24:59.656 [2024-05-13 03:05:50.181574] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.656 [2024-05-13 03:05:50.181589] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.656 [2024-05-13 03:05:50.181602] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.656 [2024-05-13 03:05:50.181618] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:24:59.656 [2024-05-13 03:05:50.181631] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:24:59.656 [2024-05-13 03:05:50.181644] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:24:59.656 [2024-05-13 03:05:50.181661] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:24:59.656 [2024-05-13 03:05:50.181675] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:24:59.656 [2024-05-13 03:05:50.181688] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:24:59.656 [2024-05-13 03:05:50.181711] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:24:59.656 [2024-05-13 03:05:50.181726] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:24:59.656 [2024-05-13 03:05:50.181742] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:24:59.656 [2024-05-13 03:05:50.181779] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:24:59.656 [2024-05-13 03:05:50.181800] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.656 [2024-05-13 03:05:50.181813] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.656 [2024-05-13 03:05:50.181825] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.656 [2024-05-13 03:05:50.181838] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.656 [2024-05-13 03:05:50.181850] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.656 [2024-05-13 03:05:50.182330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.656 [2024-05-13 03:05:50.182729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.656 [2024-05-13 03:05:50.182767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f4c640 with addr=10.0.0.2, port=4420 00:24:59.656 [2024-05-13 03:05:50.182783] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c640 is same with the state(5) to be set 00:24:59.656 [2024-05-13 03:05:50.182822] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f4c640 (9): Bad file descriptor 00:24:59.656 [2024-05-13 03:05:50.182861] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:24:59.656 [2024-05-13 03:05:50.182882] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:24:59.656 [2024-05-13 03:05:50.182897] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:24:59.656 [2024-05-13 03:05:50.182934] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.656 [2024-05-13 03:05:50.184182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.656 [2024-05-13 03:05:50.184209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.656 [2024-05-13 03:05:50.184239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.656 [2024-05-13 03:05:50.184262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.656 [2024-05-13 03:05:50.184279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.656 [2024-05-13 03:05:50.184293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.656 [2024-05-13 03:05:50.184310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.656 [2024-05-13 03:05:50.184324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.656 [2024-05-13 03:05:50.184340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.656 [2024-05-13 03:05:50.184354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.656 [2024-05-13 03:05:50.184371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.656 [2024-05-13 03:05:50.184385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.656 [2024-05-13 03:05:50.184402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.656 [2024-05-13 03:05:50.184416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.657 [2024-05-13 03:05:50.184432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.657 [2024-05-13 03:05:50.184446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.657 [2024-05-13 03:05:50.184463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.657 [2024-05-13 03:05:50.184477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.657 [2024-05-13 03:05:50.184493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.657 [2024-05-13 03:05:50.184508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.657 [2024-05-13 03:05:50.184524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.657 [2024-05-13 03:05:50.184538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.657 [2024-05-13 03:05:50.184554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.657 [2024-05-13 03:05:50.184584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.657 [2024-05-13 03:05:50.184601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.657 [2024-05-13 03:05:50.184615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.657 [2024-05-13 03:05:50.184631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.657 [2024-05-13 03:05:50.184646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.657 [2024-05-13 03:05:50.184662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.657 [2024-05-13 03:05:50.184677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.657 [2024-05-13 03:05:50.184693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.657 [2024-05-13 03:05:50.184716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.657 [2024-05-13 03:05:50.184733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.657 [2024-05-13 03:05:50.184748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.657 [2024-05-13 03:05:50.184764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.657 [2024-05-13 03:05:50.184779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.657 [2024-05-13 03:05:50.184795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.657 [2024-05-13 03:05:50.184809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.657 [2024-05-13 03:05:50.184825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.657 [2024-05-13 03:05:50.184840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.657 [2024-05-13 03:05:50.184856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.657 [2024-05-13 03:05:50.184870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.657 [2024-05-13 03:05:50.184887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.657 [2024-05-13 03:05:50.184901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.657 [2024-05-13 03:05:50.184917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.657 [2024-05-13 03:05:50.184932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.657 [2024-05-13 03:05:50.184949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.657 [2024-05-13 03:05:50.184963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.657 [2024-05-13 03:05:50.184983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.657 [2024-05-13 03:05:50.184999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.657 [2024-05-13 03:05:50.185016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.657 [2024-05-13 03:05:50.185030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.657 [2024-05-13 03:05:50.185046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.657 [2024-05-13 03:05:50.185060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.657 [2024-05-13 03:05:50.185077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.657 [2024-05-13 03:05:50.185091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.657 [2024-05-13 03:05:50.185109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.657 [2024-05-13 03:05:50.185123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.657 [2024-05-13 03:05:50.185140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.657 [2024-05-13 03:05:50.185155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.657 [2024-05-13 03:05:50.185171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.657 [2024-05-13 03:05:50.185185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.657 [2024-05-13 03:05:50.185202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.657 [2024-05-13 03:05:50.185216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.657 [2024-05-13 03:05:50.185232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.657 [2024-05-13 03:05:50.185247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.657 [2024-05-13 03:05:50.185263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.657 [2024-05-13 03:05:50.185277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.657 [2024-05-13 03:05:50.185293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.657 [2024-05-13 03:05:50.185307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.657 [2024-05-13 03:05:50.185324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:59.657 [2024-05-13 03:05:50.185338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.657 [2024-05-13 03:05:50.185354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.657 [2024-05-13 03:05:50.185372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.657 [2024-05-13 03:05:50.185399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.657 [2024-05-13 03:05:50.185413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.657 [2024-05-13 03:05:50.185429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.657 [2024-05-13 03:05:50.185444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.657 [2024-05-13 03:05:50.185460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.657 [2024-05-13 03:05:50.185478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.657 [2024-05-13 03:05:50.185495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.657 [2024-05-13 03:05:50.185509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.657 [2024-05-13 03:05:50.185525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.657 [2024-05-13 03:05:50.185539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.657 [2024-05-13 03:05:50.185555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.657 [2024-05-13 03:05:50.185569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.657 [2024-05-13 03:05:50.185586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.657 [2024-05-13 03:05:50.185599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.657 [2024-05-13 03:05:50.185615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.657 [2024-05-13 03:05:50.185629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.657 [2024-05-13 03:05:50.185646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:59.657 [2024-05-13 03:05:50.185661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.657 [2024-05-13 03:05:50.185677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.657 [2024-05-13 03:05:50.185691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.657 [2024-05-13 03:05:50.185715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.658 [2024-05-13 03:05:50.185730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.658 [2024-05-13 03:05:50.185756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.658 [2024-05-13 03:05:50.185770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.658 [2024-05-13 03:05:50.185789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.658 [2024-05-13 03:05:50.185804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.658 [2024-05-13 03:05:50.185820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.658 [2024-05-13 03:05:50.185834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.658 [2024-05-13 03:05:50.185851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.658 [2024-05-13 03:05:50.185865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.658 [2024-05-13 03:05:50.185881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.658 [2024-05-13 03:05:50.185895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.658 [2024-05-13 03:05:50.185911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.658 [2024-05-13 03:05:50.185925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.658 [2024-05-13 03:05:50.185941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.658 [2024-05-13 03:05:50.185955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.658 [2024-05-13 03:05:50.185971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.658 [2024-05-13 
03:05:50.185985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.658 [2024-05-13 03:05:50.186011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.658 [2024-05-13 03:05:50.186025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.658 [2024-05-13 03:05:50.186041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.658 [2024-05-13 03:05:50.186055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.658 [2024-05-13 03:05:50.186071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.658 [2024-05-13 03:05:50.186086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.658 [2024-05-13 03:05:50.186102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.658 [2024-05-13 03:05:50.186116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.658 [2024-05-13 03:05:50.186132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.658 [2024-05-13 03:05:50.186148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.658 [2024-05-13 03:05:50.186165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.658 [2024-05-13 03:05:50.186183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.658 [2024-05-13 03:05:50.186200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.658 [2024-05-13 03:05:50.186214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.658 [2024-05-13 03:05:50.186231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.658 [2024-05-13 03:05:50.186244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.658 [2024-05-13 03:05:50.186259] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2a990 is same with the state(5) to be set 00:24:59.658 [2024-05-13 03:05:50.187531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.658 [2024-05-13 03:05:50.187554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.658 [2024-05-13 03:05:50.187576] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.658 [2024-05-13 03:05:50.187592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.658 [2024-05-13 03:05:50.187608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.658 [2024-05-13 03:05:50.187622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.658 [2024-05-13 03:05:50.187638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.658 [2024-05-13 03:05:50.187652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.658 [2024-05-13 03:05:50.187669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.658 [2024-05-13 03:05:50.187683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.658 [2024-05-13 03:05:50.187717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.658 [2024-05-13 03:05:50.187734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.658 [2024-05-13 03:05:50.187757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.658 [2024-05-13 03:05:50.187770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.658 [2024-05-13 03:05:50.187786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.658 [2024-05-13 03:05:50.187800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.658 [2024-05-13 03:05:50.187817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.658 [2024-05-13 03:05:50.187831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.658 [2024-05-13 03:05:50.187847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.658 [2024-05-13 03:05:50.187865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.658 [2024-05-13 03:05:50.187882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.658 [2024-05-13 03:05:50.187896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.658 [2024-05-13 03:05:50.187912] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.658 [2024-05-13 03:05:50.187926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.658 [2024-05-13 03:05:50.187943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.658 [2024-05-13 03:05:50.187957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.658 [2024-05-13 03:05:50.187973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.658 [2024-05-13 03:05:50.187988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.658 [2024-05-13 03:05:50.188015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.658 [2024-05-13 03:05:50.188029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.658 [2024-05-13 03:05:50.188045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.658 [2024-05-13 03:05:50.188060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.658 [2024-05-13 03:05:50.188076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.658 [2024-05-13 03:05:50.188090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.658 [2024-05-13 03:05:50.188107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.658 [2024-05-13 03:05:50.188121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.658 [2024-05-13 03:05:50.188137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.658 [2024-05-13 03:05:50.188151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.658 [2024-05-13 03:05:50.188168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.658 [2024-05-13 03:05:50.188181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.658 [2024-05-13 03:05:50.188197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.658 [2024-05-13 03:05:50.188211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.658 [2024-05-13 03:05:50.188228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.658 [2024-05-13 03:05:50.188241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.658 [2024-05-13 03:05:50.188261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.658 [2024-05-13 03:05:50.188275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.658 [2024-05-13 03:05:50.188291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.659 [2024-05-13 03:05:50.188305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.659 [2024-05-13 03:05:50.188322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.659 [2024-05-13 03:05:50.188335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.659 [2024-05-13 03:05:50.188351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.659 [2024-05-13 03:05:50.188365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.659 [2024-05-13 03:05:50.188381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.659 [2024-05-13 03:05:50.188395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.659 [2024-05-13 03:05:50.188411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.659 [2024-05-13 03:05:50.188425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.659 [2024-05-13 03:05:50.188442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.659 [2024-05-13 03:05:50.188456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.659 [2024-05-13 03:05:50.188473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.659 [2024-05-13 03:05:50.188487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.659 [2024-05-13 03:05:50.188503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.659 [2024-05-13 03:05:50.188517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.659 [2024-05-13 03:05:50.188533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.659 [2024-05-13 03:05:50.188547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.659 [2024-05-13 03:05:50.188563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.659 [2024-05-13 03:05:50.188577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.659 [2024-05-13 03:05:50.188593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.659 [2024-05-13 03:05:50.188607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.659 [2024-05-13 03:05:50.188623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.659 [2024-05-13 03:05:50.188640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.659 [2024-05-13 03:05:50.188657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.659 [2024-05-13 03:05:50.188672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.659 [2024-05-13 03:05:50.188688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.659 [2024-05-13 03:05:50.188712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.659 [2024-05-13 03:05:50.188729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.659 [2024-05-13 03:05:50.188752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.659 [2024-05-13 03:05:50.188768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.659 [2024-05-13 03:05:50.188781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.659 [2024-05-13 03:05:50.188798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.659 [2024-05-13 03:05:50.188812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.659 [2024-05-13 03:05:50.188828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.659 [2024-05-13 03:05:50.188842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.659 [2024-05-13 03:05:50.188858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:59.659 [2024-05-13 03:05:50.188871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.659 [2024-05-13 03:05:50.188888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.659 [2024-05-13 03:05:50.188902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.659 [2024-05-13 03:05:50.188918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.659 [2024-05-13 03:05:50.188932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.659 [2024-05-13 03:05:50.188949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.659 [2024-05-13 03:05:50.188964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.659 [2024-05-13 03:05:50.188991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.659 [2024-05-13 03:05:50.189005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.659 [2024-05-13 03:05:50.189022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.659 [2024-05-13 03:05:50.189036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.659 [2024-05-13 03:05:50.189063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.659 [2024-05-13 03:05:50.189081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.659 [2024-05-13 03:05:50.189098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.659 [2024-05-13 03:05:50.189112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.659 [2024-05-13 03:05:50.189128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.659 [2024-05-13 03:05:50.189142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.659 [2024-05-13 03:05:50.189158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.659 [2024-05-13 03:05:50.189173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.659 [2024-05-13 03:05:50.189189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:59.659 [2024-05-13 03:05:50.189204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.659 [2024-05-13 03:05:50.189220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.659 [2024-05-13 03:05:50.189234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.659 [2024-05-13 03:05:50.189250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.659 [2024-05-13 03:05:50.189264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.659 [2024-05-13 03:05:50.189281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.659 [2024-05-13 03:05:50.189295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.659 [2024-05-13 03:05:50.189311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.659 [2024-05-13 03:05:50.189325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.659 [2024-05-13 03:05:50.189341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.659 [2024-05-13 03:05:50.189355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.659 [2024-05-13 03:05:50.189371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.659 [2024-05-13 03:05:50.189385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.659 [2024-05-13 03:05:50.189401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.659 [2024-05-13 03:05:50.189416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.659 [2024-05-13 03:05:50.189432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.660 [2024-05-13 03:05:50.189445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.660 [2024-05-13 03:05:50.189467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.660 [2024-05-13 03:05:50.189482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.660 [2024-05-13 03:05:50.189498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.660 [2024-05-13 
03:05:50.189513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:59.660 [2024-05-13 03:05:50.189529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.660 [2024-05-13 03:05:50.189543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:59.660 [2024-05-13 03:05:50.189560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.660 [2024-05-13 03:05:50.189574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:59.660 [2024-05-13 03:05:50.189589] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e330d0 is same with the state(5) to be set
00:24:59.660 [2024-05-13 03:05:50.191869] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:24:59.660 task offset: 24576 on job bdev=Nvme10n1 fails
00:24:59.660
00:24:59.660                                                                       Latency(us)
00:24:59.660 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:59.660 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:59.660 Job: Nvme1n1 ended in about 0.91 seconds with error
00:24:59.660 Verification LBA range: start 0x0 length 0x400
00:24:59.660 Nvme1n1 : 0.91 151.49 9.47 70.26 0.00 285406.76 20874.43 239230.67
00:24:59.660 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:59.660 Job: Nvme2n1 ended in about 0.92 seconds with error
00:24:59.660 Verification LBA range: start 0x0 length 0x400
00:24:59.660 Nvme2n1 : 0.92 211.94 13.25 69.20 0.00 220495.22 15049.01 209715.20
00:24:59.660 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:59.660 Job: Nvme3n1 ended in about 0.89 seconds with error
00:24:59.660 Verification LBA range: start 0x0 length 0x400
00:24:59.660 Nvme3n1 : 0.89 143.36 8.96 71.68 0.00 281950.06 8349.77 295154.73
00:24:59.660 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:59.660 Job: Nvme4n1 ended in about 0.93 seconds with error
00:24:59.660 Verification LBA range: start 0x0 length 0x400
00:24:59.660 Nvme4n1 : 0.93 68.59 4.29 68.59 0.00 434195.72 62914.56 354185.67
00:24:59.660 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:59.660 Job: Nvme5n1 ended in about 0.90 seconds with error
00:24:59.660 Verification LBA range: start 0x0 length 0x400
00:24:59.660 Nvme5n1 : 0.90 75.20 4.70 70.77 0.00 398390.48 41943.04 365059.79
00:24:59.660 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:59.660 Job: Nvme6n1 ended in about 0.94 seconds with error
00:24:59.660 Verification LBA range: start 0x0 length 0x400
00:24:59.660 Nvme6n1 : 0.94 204.27 12.77 68.09 0.00 209510.97 21262.79 186413.51
00:24:59.660 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:59.660 Job: Nvme7n1 ended in about 0.93 seconds with error
00:24:59.660 Verification LBA range: start 0x0 length 0x400
00:24:59.660 Nvme7n1 : 0.93 207.26 12.95 69.09 0.00 201637.74 23495.87 243891.01
00:24:59.660 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:59.660 Job: Nvme8n1 ended in about 0.95 seconds with error
00:24:59.660 Verification LBA range: start 0x0 length 0x400
00:24:59.660 Nvme8n1 : 0.95 134.75 8.42 67.38 0.00 270873.03 42525.58 290494.39
00:24:59.660 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:59.660 Job: Nvme9n1 ended in about 0.95 seconds with error
00:24:59.660 Verification LBA range: start 0x0 length 0x400
00:24:59.660 Nvme9n1 : 0.95 134.28 8.39 67.14 0.00 266032.67 23787.14 278066.82
00:24:59.660 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:59.660 Job: Nvme10n1 ended in about 0.89 seconds with error
00:24:59.660 Verification LBA range: start 0x0 length 0x400
00:24:59.660 Nvme10n1 : 0.89 215.36 13.46 71.79 0.00 179658.15 8398.32 226803.11
00:24:59.660 ===================================================================================================================
00:24:59.660 Total : 1546.49 96.66 693.98 0.00 257314.80 8349.77 365059.79
00:24:59.660 [2024-05-13 03:05:50.219393] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:24:59.660 [2024-05-13 03:05:50.219470] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:24:59.660 [2024-05-13 03:05:50.220112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.660 [2024-05-13 03:05:50.220319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.660 [2024-05-13 03:05:50.220348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dee370 with addr=10.0.0.2, port=4420
00:24:59.660 [2024-05-13 03:05:50.220369] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dee370 is same with the state(5) to be set
00:24:59.660 [2024-05-13 03:05:50.220562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.660 [2024-05-13 03:05:50.220765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.660 [2024-05-13 03:05:50.220794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dee8d0 with addr=10.0.0.2, port=4420
00:24:59.660 [2024-05-13 03:05:50.220811] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dee8d0 is same with the state(5) to be set
00:24:59.660 [2024-05-13 03:05:50.220854] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:24:59.660 [2024-05-13 03:05:50.220878] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:24:59.660 [2024-05-13 03:05:50.220896] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:24:59.660 [2024-05-13 03:05:50.220915] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:24:59.660 [2024-05-13 03:05:50.220933] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:24:59.660 [2024-05-13 03:05:50.220951] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:24:59.660 [2024-05-13 03:05:50.220969] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
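Note on the latency summary above: with the 65536-byte IO size printed in each "Job:" line, the MiB/s column follows directly from the IOPS column (MiB/s = IOPS x 65536 / 2^20, i.e. IOPS / 16), and the Total IOPS and MiB/s figures are, up to rounding, the per-device sums. The snippet below is a minimal sketch, not part of the test output or the SPDK tree, that recomputes those two columns from the values copied out of the table; it is only an arithmetic cross-check.

    # Hypothetical helper: recompute the MiB/s and Total columns of the
    # bdevperf latency summary from the reported per-device IOPS.
    IO_SIZE = 65536  # bytes per IO, as shown in each "Job:" line above

    iops = {
        "Nvme1n1": 151.49, "Nvme2n1": 211.94, "Nvme3n1": 143.36, "Nvme4n1": 68.59,
        "Nvme5n1": 75.20,  "Nvme6n1": 204.27, "Nvme7n1": 207.26, "Nvme8n1": 134.75,
        "Nvme9n1": 134.28, "Nvme10n1": 215.36,
    }

    for dev, rate in iops.items():
        mib_s = rate * IO_SIZE / (1 << 20)   # 65536 / 2^20 == 1/16, e.g. Nvme1n1 -> ~9.47
        print(f"{dev:>9}: {mib_s:6.2f} MiB/s")

    total_iops = sum(iops.values())          # ~1546.5, matching the Total row
    print(f"    Total: {total_iops * IO_SIZE / (1 << 20):6.2f} MiB/s")  # ~96.66

Running it against the numbers above reproduces the 96.66 MiB/s aggregate, which is a quick sanity check when comparing this run against other nightly results.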
00:24:59.660 [2024-05-13 03:05:50.221526] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:24:59.660 [2024-05-13 03:05:50.221553] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:24:59.660 [2024-05-13 03:05:50.221570] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:24:59.660 [2024-05-13 03:05:50.221586] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:24:59.660 [2024-05-13 03:05:50.221601] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:24:59.660 [2024-05-13 03:05:50.221618] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.660 [2024-05-13 03:05:50.221633] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:24:59.660 [2024-05-13 03:05:50.221755] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dee370 (9): Bad file descriptor 00:24:59.660 [2024-05-13 03:05:50.221786] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dee8d0 (9): Bad file descriptor 00:24:59.660 [2024-05-13 03:05:50.222311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.660 [2024-05-13 03:05:50.222503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.660 [2024-05-13 03:05:50.222530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f61cb0 with addr=10.0.0.2, port=4420 00:24:59.660 [2024-05-13 03:05:50.222547] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f61cb0 is same with the state(5) to be set 00:24:59.660 [2024-05-13 03:05:50.222720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.660 [2024-05-13 03:05:50.222923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.660 [2024-05-13 03:05:50.222949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1de2c20 with addr=10.0.0.2, port=4420 00:24:59.660 [2024-05-13 03:05:50.222965] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de2c20 is same with the state(5) to be set 00:24:59.660 [2024-05-13 03:05:50.223182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.660 [2024-05-13 03:05:50.223398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.660 [2024-05-13 03:05:50.223424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f479e0 with addr=10.0.0.2, port=4420 00:24:59.660 [2024-05-13 03:05:50.223440] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f479e0 is same with the state(5) to be set 00:24:59.660 [2024-05-13 03:05:50.223619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.660 [2024-05-13 03:05:50.223806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.660 [2024-05-13 03:05:50.223832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dd0370 with addr=10.0.0.2, port=4420 00:24:59.660 [2024-05-13 03:05:50.223848] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd0370 is same with 
the state(5) to be set 00:24:59.660 [2024-05-13 03:05:50.224023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.660 [2024-05-13 03:05:50.224206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.660 [2024-05-13 03:05:50.224231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f5e730 with addr=10.0.0.2, port=4420 00:24:59.660 [2024-05-13 03:05:50.224248] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5e730 is same with the state(5) to be set 00:24:59.660 [2024-05-13 03:05:50.224432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.660 [2024-05-13 03:05:50.224626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.660 [2024-05-13 03:05:50.224652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da65c0 with addr=10.0.0.2, port=4420 00:24:59.661 [2024-05-13 03:05:50.224668] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da65c0 is same with the state(5) to be set 00:24:59.661 [2024-05-13 03:05:50.224865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.661 [2024-05-13 03:05:50.225080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.661 [2024-05-13 03:05:50.225106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f70cc0 with addr=10.0.0.2, port=4420 00:24:59.661 [2024-05-13 03:05:50.225123] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f70cc0 is same with the state(5) to be set 00:24:59.661 [2024-05-13 03:05:50.225139] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:24:59.661 [2024-05-13 03:05:50.225152] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:24:59.661 [2024-05-13 03:05:50.225176] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:24:59.661 [2024-05-13 03:05:50.225197] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:24:59.661 [2024-05-13 03:05:50.225212] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:24:59.661 [2024-05-13 03:05:50.225225] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:24:59.661 [2024-05-13 03:05:50.225929] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:24:59.661 [2024-05-13 03:05:50.225974] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.661 [2024-05-13 03:05:50.226000] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.661 [2024-05-13 03:05:50.226032] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f61cb0 (9): Bad file descriptor 00:24:59.661 [2024-05-13 03:05:50.226067] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de2c20 (9): Bad file descriptor 00:24:59.661 [2024-05-13 03:05:50.226085] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f479e0 (9): Bad file descriptor 00:24:59.661 [2024-05-13 03:05:50.226103] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd0370 (9): Bad file descriptor 00:24:59.661 [2024-05-13 03:05:50.226120] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f5e730 (9): Bad file descriptor 00:24:59.661 [2024-05-13 03:05:50.226137] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1da65c0 (9): Bad file descriptor 00:24:59.661 [2024-05-13 03:05:50.226154] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f70cc0 (9): Bad file descriptor 00:24:59.661 [2024-05-13 03:05:50.226388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.661 [2024-05-13 03:05:50.226615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.661 [2024-05-13 03:05:50.226642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f4c640 with addr=10.0.0.2, port=4420 00:24:59.661 [2024-05-13 03:05:50.226658] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c640 is same with the state(5) to be set 00:24:59.661 [2024-05-13 03:05:50.226682] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:24:59.661 [2024-05-13 03:05:50.226703] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:24:59.661 [2024-05-13 03:05:50.226718] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:24:59.661 [2024-05-13 03:05:50.226736] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:24:59.661 [2024-05-13 03:05:50.226756] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:24:59.661 [2024-05-13 03:05:50.226770] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:24:59.661 [2024-05-13 03:05:50.226786] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:24:59.661 [2024-05-13 03:05:50.226800] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:24:59.661 [2024-05-13 03:05:50.226812] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:24:59.661 [2024-05-13 03:05:50.226828] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:24:59.661 [2024-05-13 03:05:50.226842] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:24:59.661 [2024-05-13 03:05:50.226860] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:24:59.661 [2024-05-13 03:05:50.226877] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:24:59.661 [2024-05-13 03:05:50.226890] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:24:59.661 [2024-05-13 03:05:50.226903] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:24:59.661 [2024-05-13 03:05:50.226919] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.661 [2024-05-13 03:05:50.226933] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.661 [2024-05-13 03:05:50.226946] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.661 [2024-05-13 03:05:50.226962] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:59.661 [2024-05-13 03:05:50.226976] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:24:59.661 [2024-05-13 03:05:50.226989] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:24:59.661 [2024-05-13 03:05:50.227027] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.661 [2024-05-13 03:05:50.227056] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.661 [2024-05-13 03:05:50.227068] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.661 [2024-05-13 03:05:50.227080] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.661 [2024-05-13 03:05:50.227093] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.661 [2024-05-13 03:05:50.227105] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.661 [2024-05-13 03:05:50.227121] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f4c640 (9): Bad file descriptor 00:24:59.661 [2024-05-13 03:05:50.227150] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.661 [2024-05-13 03:05:50.227181] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:24:59.661 [2024-05-13 03:05:50.227198] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:24:59.661 [2024-05-13 03:05:50.227212] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:24:59.661 [2024-05-13 03:05:50.227247] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
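The connect() failures above are errno 111 (ECONNREFUSED): by this point the shutdown test has already killed the nvmf target, so every reconnect attempt to 10.0.0.2:4420 is refused, failover cannot make progress, and each of the cnode1..cnode10 controllers ends up in the failed state. A minimal sketch of how a still-running bdevperf app's view of these controllers could be inspected by hand over its JSON-RPC socket (the socket path here is an assumption, matching the -r /var/tmp/bdevperf.sock convention used later in this log for the multicontroller bdevperf):

  # list the NVMe-oF controllers the bdevperf app still has attached
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
  # list the bdevs the verify workload was driving
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/bdevperf.sock bdev_get_bdevs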
00:24:59.922 03:05:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:24:59.922 03:05:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:25:01.302 03:05:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 415360 00:25:01.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (415360) - No such process 00:25:01.302 03:05:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:25:01.302 03:05:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:25:01.302 03:05:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:25:01.302 03:05:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:01.302 03:05:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:01.302 03:05:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:25:01.302 03:05:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:01.302 03:05:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:25:01.302 03:05:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:01.302 03:05:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:25:01.302 03:05:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:01.302 03:05:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:01.302 rmmod nvme_tcp 00:25:01.302 rmmod nvme_fabrics 00:25:01.303 rmmod nvme_keyring 00:25:01.303 03:05:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:01.303 03:05:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:25:01.303 03:05:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:25:01.303 03:05:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:25:01.303 03:05:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:01.303 03:05:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:01.303 03:05:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:01.303 03:05:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:01.303 03:05:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:01.303 03:05:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:01.303 03:05:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:01.303 03:05:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.207 03:05:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:03.207 00:25:03.207 real 0m7.561s 00:25:03.207 user 0m18.070s 00:25:03.207 sys 0m1.630s 00:25:03.207 
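stoptarget/nvmftestfini above clean up what nvmf_shutdown_tc3 created: the bdevperf state and config files are removed, the kernel NVMe/TCP initiator modules are unloaded (the rmmod output shows nvme_tcp, nvme_fabrics and nvme_keyring going away), and the test network namespace and addresses are torn down. Condensed into the equivalent by-hand sequence (paths and device names taken from this log; treating _remove_spdk_ns as an 'ip netns delete' of the test namespace is an assumption):

  rm -f ./local-job0-0-verify.state
  rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
  rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
  sync
  modprobe -v -r nvme-tcp           # unloads the kernel NVMe/TCP initiator stack
  modprobe -v -r nvme-fabrics
  ip netns delete cvl_0_0_ns_spdk   # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush cvl_0_1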
03:05:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:03.207 03:05:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:03.207 ************************************ 00:25:03.207 END TEST nvmf_shutdown_tc3 00:25:03.207 ************************************ 00:25:03.207 03:05:53 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:25:03.207 00:25:03.207 real 0m27.150s 00:25:03.207 user 1m14.888s 00:25:03.207 sys 0m6.462s 00:25:03.207 03:05:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:03.207 03:05:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:03.207 ************************************ 00:25:03.207 END TEST nvmf_shutdown 00:25:03.207 ************************************ 00:25:03.207 03:05:53 nvmf_tcp -- nvmf/nvmf.sh@84 -- # timing_exit target 00:25:03.207 03:05:53 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:03.207 03:05:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:03.207 03:05:53 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_enter host 00:25:03.207 03:05:53 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:03.207 03:05:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:03.207 03:05:53 nvmf_tcp -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:25:03.207 03:05:53 nvmf_tcp -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:03.207 03:05:53 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:03.207 03:05:53 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:03.207 03:05:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:03.207 ************************************ 00:25:03.207 START TEST nvmf_multicontroller 00:25:03.207 ************************************ 00:25:03.207 03:05:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:03.207 * Looking for test storage... 
00:25:03.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:25:03.208 03:05:53 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:25:03.208 03:05:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:05.739 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:05.739 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:25:05.739 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:05.739 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:05.739 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:05.739 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:05.739 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:05.739 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:25:05.739 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:05.739 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:25:05.739 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:25:05.739 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:25:05.739 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:25:05.739 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:25:05.739 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:25:05.739 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:05.739 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:05.739 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:05.739 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:05.739 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:05.739 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:05.739 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:05.739 03:05:55 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:05.739 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:05.739 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:05.739 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:05.739 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:05.739 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:05.739 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:05.739 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:05.739 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:05.739 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:05.739 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:05.739 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:05.739 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:05.739 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:05.739 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:05.739 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.739 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.740 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:05.740 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:05.740 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:05.740 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:05.740 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:05.740 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:05.740 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.740 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.740 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:05.740 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:05.740 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:05.740 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:05.740 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:05.740 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.740 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:05.740 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.740 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:05.740 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:25:05.740 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.740 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:05.740 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:05.740 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.740 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:05.740 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.740 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:05.740 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.740 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:05.740 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:05.740 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.740 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:05.740 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:05.740 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.740 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:05.740 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:25:05.740 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:05.740 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:05.740 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:05.740 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:05.740 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:05.740 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:05.740 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:05.740 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:05.740 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:05.740 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:05.740 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:05.740 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:05.740 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:05.740 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:05.740 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:05.740 03:05:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:05.740 03:05:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:05.740 03:05:56 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:05.740 03:05:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:05.740 03:05:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:05.740 03:05:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:05.740 03:05:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:05.740 03:05:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:05.740 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:05.740 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:25:05.740 00:25:05.740 --- 10.0.0.2 ping statistics --- 00:25:05.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.740 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:25:05.740 03:05:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:05.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:05.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:25:05.740 00:25:05.740 --- 10.0.0.1 ping statistics --- 00:25:05.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.740 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:25:05.740 03:05:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:05.740 03:05:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:25:05.740 03:05:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:05.740 03:05:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:05.740 03:05:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:05.740 03:05:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:05.740 03:05:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:05.740 03:05:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:05.740 03:05:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:05.740 03:05:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:25:05.740 03:05:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:05.740 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:05.740 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:05.740 03:05:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=417880 00:25:05.740 03:05:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:05.740 03:05:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 417880 00:25:05.740 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 417880 ']' 00:25:05.740 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:05.740 03:05:56 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:25:05.740 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:05.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:05.740 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:05.740 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:05.740 [2024-05-13 03:05:56.171876] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:25:05.740 [2024-05-13 03:05:56.171962] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:05.740 EAL: No free 2048 kB hugepages reported on node 1 00:25:05.740 [2024-05-13 03:05:56.210868] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:05.740 [2024-05-13 03:05:56.242296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:05.740 [2024-05-13 03:05:56.335063] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:05.740 [2024-05-13 03:05:56.335112] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:05.740 [2024-05-13 03:05:56.335138] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:05.740 [2024-05-13 03:05:56.335152] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:05.740 [2024-05-13 03:05:56.335164] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
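With the target app started inside the cvl_0_0_ns_spdk namespace and its RPC socket up, the rpc_cmd calls that follow configure it for the multicontroller test: a TCP transport, two 64 MiB malloc bdevs, subsystems cnode1 and cnode2 with those namespaces, and listeners on 10.0.0.2 ports 4420 and 4421. Condensed, the same configuration could be issued directly with scripts/rpc.py against the default /var/tmp/spdk.sock (a sketch of the cnode1 half only; rpc_cmd in the test wraps exactly these calls):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421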
00:25:05.740 [2024-05-13 03:05:56.335240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:05.740 [2024-05-13 03:05:56.335339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:05.740 [2024-05-13 03:05:56.335341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:05.740 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:05.740 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:25:05.740 03:05:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:05.740 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:05.740 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:05.740 03:05:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:05.740 03:05:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:05.740 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.740 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:05.740 [2024-05-13 03:05:56.462089] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:05.740 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.740 03:05:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:05.740 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.740 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:05.740 Malloc0 00:25:05.740 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.740 03:05:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:05.741 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.741 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:05.741 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.741 03:05:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:05.741 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.741 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:05.741 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.741 03:05:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:05.741 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.741 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:05.741 [2024-05-13 03:05:56.518002] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:05.741 [2024-05-13 
03:05:56.518282] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:05.741 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.741 03:05:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:05.741 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.741 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:05.741 [2024-05-13 03:05:56.526118] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:05.741 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.741 03:05:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:05.741 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.741 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:05.999 Malloc1 00:25:05.999 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.999 03:05:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:25:05.999 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.999 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:05.999 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.999 03:05:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:25:05.999 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.999 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:05.999 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.999 03:05:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:05.999 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.999 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:05.999 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.999 03:05:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:25:05.999 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.999 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:05.999 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.999 03:05:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=417909 00:25:05.999 03:05:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:25:05.999 03:05:56 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:05.999 03:05:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 417909 /var/tmp/bdevperf.sock 00:25:05.999 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 417909 ']' 00:25:05.999 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:05.999 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:05.999 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:05.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:05.999 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:06.000 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:06.258 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:06.258 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:25:06.258 03:05:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:25:06.258 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.258 03:05:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:06.519 NVMe0n1 00:25:06.519 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.519 03:05:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:25:06.519 03:05:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:06.519 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.519 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:06.519 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.519 1 00:25:06.519 03:05:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:06.519 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:25:06.519 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:06.519 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:06.519 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:06.519 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:06.519 03:05:57 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:06.519 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:06.519 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.519 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:06.519 request: 00:25:06.519 { 00:25:06.519 "name": "NVMe0", 00:25:06.519 "trtype": "tcp", 00:25:06.519 "traddr": "10.0.0.2", 00:25:06.519 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:25:06.519 "hostaddr": "10.0.0.2", 00:25:06.519 "hostsvcid": "60000", 00:25:06.519 "adrfam": "ipv4", 00:25:06.519 "trsvcid": "4420", 00:25:06.519 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:06.519 "method": "bdev_nvme_attach_controller", 00:25:06.520 "req_id": 1 00:25:06.520 } 00:25:06.520 Got JSON-RPC error response 00:25:06.520 response: 00:25:06.520 { 00:25:06.520 "code": -114, 00:25:06.520 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:25:06.520 } 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:06.520 request: 00:25:06.520 { 00:25:06.520 "name": "NVMe0", 00:25:06.520 "trtype": "tcp", 00:25:06.520 "traddr": "10.0.0.2", 00:25:06.520 "hostaddr": "10.0.0.2", 00:25:06.520 "hostsvcid": "60000", 00:25:06.520 "adrfam": "ipv4", 00:25:06.520 "trsvcid": "4420", 
00:25:06.520 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:06.520 "method": "bdev_nvme_attach_controller", 00:25:06.520 "req_id": 1 00:25:06.520 } 00:25:06.520 Got JSON-RPC error response 00:25:06.520 response: 00:25:06.520 { 00:25:06.520 "code": -114, 00:25:06.520 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:25:06.520 } 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:06.520 request: 00:25:06.520 { 00:25:06.520 "name": "NVMe0", 00:25:06.520 "trtype": "tcp", 00:25:06.520 "traddr": "10.0.0.2", 00:25:06.520 "hostaddr": "10.0.0.2", 00:25:06.520 "hostsvcid": "60000", 00:25:06.520 "adrfam": "ipv4", 00:25:06.520 "trsvcid": "4420", 00:25:06.520 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:06.520 "multipath": "disable", 00:25:06.520 "method": "bdev_nvme_attach_controller", 00:25:06.520 "req_id": 1 00:25:06.520 } 00:25:06.520 Got JSON-RPC error response 00:25:06.520 response: 00:25:06.520 { 00:25:06.520 "code": -114, 00:25:06.520 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:25:06.520 } 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:06.520 request: 00:25:06.520 { 00:25:06.520 "name": "NVMe0", 00:25:06.520 "trtype": "tcp", 00:25:06.520 "traddr": "10.0.0.2", 00:25:06.520 "hostaddr": "10.0.0.2", 00:25:06.520 "hostsvcid": "60000", 00:25:06.520 "adrfam": "ipv4", 00:25:06.520 "trsvcid": "4420", 00:25:06.520 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:06.520 "multipath": "failover", 00:25:06.520 "method": "bdev_nvme_attach_controller", 00:25:06.520 "req_id": 1 00:25:06.520 } 00:25:06.520 Got JSON-RPC error response 00:25:06.520 response: 00:25:06.520 { 00:25:06.520 "code": -114, 00:25:06.520 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:25:06.520 } 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:06.520 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:06.520 
03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:06.520 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:25:06.520 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:06.779 03:05:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.779 03:05:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:25:06.779 03:05:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:07.714 0 00:25:07.714 03:05:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:25:07.714 03:05:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.714 03:05:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:07.714 03:05:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.714 03:05:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 417909 00:25:07.714 03:05:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 417909 ']' 00:25:07.714 03:05:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 417909 00:25:07.714 03:05:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:25:07.714 03:05:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:07.714 03:05:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 417909 00:25:07.714 03:05:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:07.714 03:05:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:07.714 03:05:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 417909' 00:25:07.714 killing process with pid 417909 00:25:07.714 03:05:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 417909 00:25:07.714 03:05:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 417909 00:25:07.974 03:05:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:07.974 03:05:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.974 03:05:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:07.974 03:05:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.974 03:05:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:07.974 03:05:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.974 03:05:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:07.974 03:05:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.974 03:05:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:25:07.974 03:05:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:07.974 03:05:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:25:07.974 03:05:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:25:07.974 03:05:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # sort -u 00:25:07.974 03:05:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # cat 00:25:07.974 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:07.974 [2024-05-13 03:05:56.621808] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:25:07.974 [2024-05-13 03:05:56.621911] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid417909 ] 00:25:07.974 EAL: No free 2048 kB hugepages reported on node 1 00:25:07.974 [2024-05-13 03:05:56.655133] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:07.974 [2024-05-13 03:05:56.683536] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:07.974 [2024-05-13 03:05:56.769610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:07.974 [2024-05-13 03:05:57.314983] bdev.c:4555:bdev_name_add: *ERROR*: Bdev name 03bb2ce8-4e63-4f7f-9424-dc5846f396f6 already exists 00:25:07.974 [2024-05-13 03:05:57.315026] bdev.c:7672:bdev_register: *ERROR*: Unable to add uuid:03bb2ce8-4e63-4f7f-9424-dc5846f396f6 alias for bdev NVMe1n1 00:25:07.974 [2024-05-13 03:05:57.315043] bdev_nvme.c:4297:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:25:07.974 Running I/O for 1 seconds... 
00:25:07.974 00:25:07.974 Latency(us) 00:25:07.974 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:07.974 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:25:07.974 NVMe0n1 : 1.01 19058.03 74.45 0.00 0.00 6697.69 5145.79 17573.36 00:25:07.974 =================================================================================================================== 00:25:07.974 Total : 19058.03 74.45 0.00 0.00 6697.69 5145.79 17573.36 00:25:07.974 Received shutdown signal, test time was about 1.000000 seconds 00:25:07.974 00:25:07.974 Latency(us) 00:25:07.974 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:07.974 =================================================================================================================== 00:25:07.974 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:07.974 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:07.974 03:05:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1614 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:07.974 03:05:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:25:07.974 03:05:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:25:07.974 03:05:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:07.974 03:05:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:25:07.974 03:05:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:07.974 03:05:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:25:07.974 03:05:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:07.974 03:05:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:07.974 rmmod nvme_tcp 00:25:07.974 rmmod nvme_fabrics 00:25:08.233 rmmod nvme_keyring 00:25:08.233 03:05:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:08.233 03:05:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:25:08.233 03:05:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:25:08.233 03:05:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 417880 ']' 00:25:08.233 03:05:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 417880 00:25:08.233 03:05:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 417880 ']' 00:25:08.233 03:05:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 417880 00:25:08.233 03:05:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:25:08.233 03:05:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:08.233 03:05:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 417880 00:25:08.233 03:05:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:08.233 03:05:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:08.233 03:05:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 417880' 00:25:08.233 killing process with pid 417880 00:25:08.233 03:05:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 417880 00:25:08.233 [2024-05-13 03:05:58.822819] 
app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:08.233 03:05:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 417880 00:25:08.493 03:05:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:08.493 03:05:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:08.493 03:05:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:08.493 03:05:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:08.493 03:05:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:08.493 03:05:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:08.493 03:05:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:08.493 03:05:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:10.396 03:06:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:10.396 00:25:10.396 real 0m7.274s 00:25:10.396 user 0m10.420s 00:25:10.396 sys 0m2.530s 00:25:10.396 03:06:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:10.396 03:06:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:10.396 ************************************ 00:25:10.396 END TEST nvmf_multicontroller 00:25:10.396 ************************************ 00:25:10.396 03:06:01 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:10.396 03:06:01 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:10.396 03:06:01 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:10.396 03:06:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:10.655 ************************************ 00:25:10.655 START TEST nvmf_aer 00:25:10.655 ************************************ 00:25:10.655 03:06:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:10.655 * Looking for test storage... 
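[annotation] For context on the nvmf_multicontroller run that just finished above: the NOT-wrapped attach attempts show bdev_nvme_attach_controller refusing to reuse an existing controller name, returning JSON-RPC error -114 both when multipath is set to disable and when failover is requested against the network path NVMe0 already uses, while a plain attach of the same name to the second listener (port 4421) succeeds. A minimal manual reproduction, assuming the harness's rpc_cmd wrapper resolves to scripts/rpc.py and that bdevperf already owns controller NVMe0 on 10.0.0.2:4420 via the /var/tmp/bdevperf.sock RPC socket (a sketch, not the literal test script):

  # expect error -114: a controller named NVMe0 already exists and multipath is disabled
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable
  # expect error -114: failover requested against the network path NVMe0 is already using
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover
  # attaching the same controller name to the second listener (port 4421) is accepted
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1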
00:25:10.655 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:10.655 03:06:01 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:10.655 03:06:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:25:10.655 03:06:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:10.655 03:06:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:10.655 03:06:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:10.655 03:06:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:10.655 03:06:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:10.655 03:06:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:10.655 03:06:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:10.655 03:06:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:10.655 03:06:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:10.655 03:06:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:10.655 03:06:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:10.655 03:06:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:10.655 03:06:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:10.655 03:06:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:10.655 03:06:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:10.655 03:06:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:10.655 03:06:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:10.655 03:06:01 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:10.655 03:06:01 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:10.655 03:06:01 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:10.655 03:06:01 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.655 03:06:01 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.655 03:06:01 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.655 03:06:01 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:25:10.655 03:06:01 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.655 03:06:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:25:10.655 03:06:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:10.655 03:06:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:10.655 03:06:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:10.655 03:06:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:10.655 03:06:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:10.655 03:06:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:10.655 03:06:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:10.655 03:06:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:10.655 03:06:01 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:25:10.655 03:06:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:10.655 03:06:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:10.655 03:06:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:10.655 03:06:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:10.655 03:06:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:10.655 03:06:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:10.655 03:06:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:10.655 03:06:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:10.655 03:06:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:10.655 03:06:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:10.655 03:06:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:25:10.655 03:06:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:12.558 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 
0x159b)' 00:25:12.558 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:12.558 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:12.558 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:12.558 
03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:12.558 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:12.559 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:12.559 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:12.559 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:12.559 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:12.559 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:12.559 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:12.559 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:12.559 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:12.559 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:12.559 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:12.817 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:12.817 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:12.817 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:12.817 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:12.817 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:25:12.817 00:25:12.817 --- 10.0.0.2 ping statistics --- 00:25:12.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:12.817 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:25:12.817 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:12.817 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:12.817 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:25:12.817 00:25:12.817 --- 10.0.0.1 ping statistics --- 00:25:12.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:12.817 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:25:12.817 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:12.817 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:25:12.817 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:12.817 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:12.817 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:12.817 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:12.817 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:12.817 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:12.817 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:12.817 03:06:03 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:25:12.817 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:12.817 03:06:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:12.817 03:06:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:12.817 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=420232 00:25:12.817 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:12.817 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 420232 00:25:12.817 03:06:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 420232 ']' 00:25:12.817 03:06:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:12.817 03:06:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:12.817 03:06:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:12.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:12.817 03:06:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:12.817 03:06:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:12.817 [2024-05-13 03:06:03.461024] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:25:12.817 [2024-05-13 03:06:03.461113] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:12.817 EAL: No free 2048 kB hugepages reported on node 1 00:25:12.817 [2024-05-13 03:06:03.499265] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:12.817 [2024-05-13 03:06:03.527714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:12.817 [2024-05-13 03:06:03.615916] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
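[annotation] Before the aer RPC calls below are issued, nvmftestinit (test/nvmf/common.sh) has already wired the two ice ports into a target namespace plus an initiator side, as logged just above. Condensed into the bare ip/iptables calls visible in this run (interface names and addresses taken from the log; a sketch of the setup, not a replacement for common.sh):

  ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP on the default port
  ping -c 1 10.0.0.2                                                  # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target namespace -> root namespace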
00:25:12.817 [2024-05-13 03:06:03.615963] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:12.817 [2024-05-13 03:06:03.615977] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:12.817 [2024-05-13 03:06:03.615990] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:12.817 [2024-05-13 03:06:03.616001] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:12.817 [2024-05-13 03:06:03.616057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:12.817 [2024-05-13 03:06:03.616109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:12.817 [2024-05-13 03:06:03.616156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:12.817 [2024-05-13 03:06:03.616158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:13.074 03:06:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:13.074 03:06:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:25:13.074 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:13.074 03:06:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:13.074 03:06:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:13.074 03:06:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:13.074 03:06:03 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:13.074 03:06:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.074 03:06:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:13.074 [2024-05-13 03:06:03.770336] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:13.074 03:06:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.074 03:06:03 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:25:13.074 03:06:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.074 03:06:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:13.074 Malloc0 00:25:13.074 03:06:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.074 03:06:03 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:25:13.074 03:06:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.074 03:06:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:13.074 03:06:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.074 03:06:03 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:13.074 03:06:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.074 03:06:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:13.074 03:06:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.074 03:06:03 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:13.074 03:06:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.074 03:06:03 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:13.074 [2024-05-13 03:06:03.821539] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:13.074 [2024-05-13 03:06:03.821847] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:13.074 03:06:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.074 03:06:03 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:25:13.074 03:06:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.074 03:06:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:13.074 [ 00:25:13.074 { 00:25:13.074 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:13.074 "subtype": "Discovery", 00:25:13.074 "listen_addresses": [], 00:25:13.074 "allow_any_host": true, 00:25:13.074 "hosts": [] 00:25:13.074 }, 00:25:13.074 { 00:25:13.074 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:13.074 "subtype": "NVMe", 00:25:13.074 "listen_addresses": [ 00:25:13.074 { 00:25:13.074 "trtype": "TCP", 00:25:13.074 "adrfam": "IPv4", 00:25:13.074 "traddr": "10.0.0.2", 00:25:13.074 "trsvcid": "4420" 00:25:13.074 } 00:25:13.074 ], 00:25:13.074 "allow_any_host": true, 00:25:13.074 "hosts": [], 00:25:13.074 "serial_number": "SPDK00000000000001", 00:25:13.074 "model_number": "SPDK bdev Controller", 00:25:13.074 "max_namespaces": 2, 00:25:13.074 "min_cntlid": 1, 00:25:13.074 "max_cntlid": 65519, 00:25:13.074 "namespaces": [ 00:25:13.074 { 00:25:13.074 "nsid": 1, 00:25:13.074 "bdev_name": "Malloc0", 00:25:13.074 "name": "Malloc0", 00:25:13.074 "nguid": "21103FCA452A47729E27BEFA5900E99F", 00:25:13.074 "uuid": "21103fca-452a-4772-9e27-befa5900e99f" 00:25:13.074 } 00:25:13.074 ] 00:25:13.074 } 00:25:13.074 ] 00:25:13.074 03:06:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.074 03:06:03 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:25:13.074 03:06:03 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:25:13.074 03:06:03 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=420372 00:25:13.074 03:06:03 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:25:13.074 03:06:03 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:25:13.074 03:06:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:25:13.074 03:06:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:13.074 03:06:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:25:13.074 03:06:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:25:13.074 03:06:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:25:13.332 EAL: No free 2048 kB hugepages reported on node 1 00:25:13.332 03:06:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:25:13.332 03:06:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:25:13.332 03:06:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:25:13.332 03:06:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:25:13.332 03:06:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:13.332 03:06:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:13.332 03:06:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:25:13.332 03:06:04 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:25:13.332 03:06:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.332 03:06:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:13.332 Malloc1 00:25:13.332 03:06:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.332 03:06:04 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:25:13.332 03:06:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.332 03:06:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:13.332 03:06:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.332 03:06:04 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:25:13.332 03:06:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.332 03:06:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:13.332 [ 00:25:13.332 { 00:25:13.332 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:13.332 "subtype": "Discovery", 00:25:13.332 "listen_addresses": [], 00:25:13.332 "allow_any_host": true, 00:25:13.332 "hosts": [] 00:25:13.332 }, 00:25:13.332 { 00:25:13.332 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:13.332 "subtype": "NVMe", 00:25:13.332 "listen_addresses": [ 00:25:13.332 { 00:25:13.332 "trtype": "TCP", 00:25:13.332 "adrfam": "IPv4", 00:25:13.332 "traddr": "10.0.0.2", 00:25:13.332 "trsvcid": "4420" 00:25:13.332 } 00:25:13.332 ], 00:25:13.332 "allow_any_host": true, 00:25:13.332 "hosts": [], 00:25:13.332 "serial_number": "SPDK00000000000001", 00:25:13.332 "model_number": "SPDK bdev Controller", 00:25:13.332 "max_namespaces": 2, 00:25:13.332 "min_cntlid": 1, 00:25:13.332 "max_cntlid": 65519, 00:25:13.332 "namespaces": [ 00:25:13.332 { 00:25:13.332 "nsid": 1, 00:25:13.332 "bdev_name": "Malloc0", 00:25:13.332 "name": "Malloc0", 00:25:13.332 "nguid": "21103FCA452A47729E27BEFA5900E99F", 00:25:13.332 "uuid": "21103fca-452a-4772-9e27-befa5900e99f" 00:25:13.332 }, 00:25:13.332 { 00:25:13.332 "nsid": 2, 00:25:13.332 "bdev_name": "Malloc1", 00:25:13.332 "name": "Malloc1", 00:25:13.332 "nguid": "FBC8B5C0813C4732A196B37FB73D7F64", 00:25:13.332 "uuid": "fbc8b5c0-813c-4732-a196-b37fb73d7f64" 00:25:13.332 } 00:25:13.332 ] 00:25:13.332 } 00:25:13.332 ] 00:25:13.332 03:06:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.332 03:06:04 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 420372 00:25:13.332 Asynchronous Event Request test 00:25:13.332 Attaching to 10.0.0.2 00:25:13.332 Attached to 10.0.0.2 00:25:13.332 Registering asynchronous event callbacks... 00:25:13.332 Starting namespace attribute notice tests for all controllers... 
00:25:13.332 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:25:13.332 aer_cb - Changed Namespace 00:25:13.332 Cleaning up... 00:25:13.590 03:06:04 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:25:13.590 03:06:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.590 03:06:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:13.590 03:06:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.590 03:06:04 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:25:13.590 03:06:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.590 03:06:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:13.590 03:06:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.590 03:06:04 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:13.590 03:06:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.590 03:06:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:13.590 03:06:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.590 03:06:04 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:25:13.590 03:06:04 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:25:13.590 03:06:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:13.590 03:06:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:25:13.590 03:06:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:13.590 03:06:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:25:13.590 03:06:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:13.590 03:06:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:13.590 rmmod nvme_tcp 00:25:13.590 rmmod nvme_fabrics 00:25:13.590 rmmod nvme_keyring 00:25:13.590 03:06:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:13.590 03:06:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:25:13.590 03:06:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:25:13.590 03:06:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 420232 ']' 00:25:13.590 03:06:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 420232 00:25:13.590 03:06:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 420232 ']' 00:25:13.590 03:06:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 420232 00:25:13.590 03:06:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:25:13.590 03:06:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:13.590 03:06:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 420232 00:25:13.590 03:06:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:13.590 03:06:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:13.590 03:06:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 420232' 00:25:13.590 killing process with pid 420232 00:25:13.590 03:06:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # kill 420232 00:25:13.590 [2024-05-13 03:06:04.299662] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of 
trtype' scheduled for removal in v24.09 hit 1 times 00:25:13.590 03:06:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@970 -- # wait 420232 00:25:13.849 03:06:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:13.849 03:06:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:13.849 03:06:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:13.849 03:06:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:13.849 03:06:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:13.849 03:06:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:13.849 03:06:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:13.849 03:06:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:16.381 03:06:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:16.381 00:25:16.381 real 0m5.358s 00:25:16.381 user 0m4.319s 00:25:16.381 sys 0m1.857s 00:25:16.381 03:06:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:16.381 03:06:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:16.381 ************************************ 00:25:16.381 END TEST nvmf_aer 00:25:16.381 ************************************ 00:25:16.381 03:06:06 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:16.381 03:06:06 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:16.381 03:06:06 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:16.381 03:06:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:16.381 ************************************ 00:25:16.381 START TEST nvmf_async_init 00:25:16.381 ************************************ 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:16.381 * Looking for test storage... 
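[annotation] Recapping the nvmf_aer sequence that just passed above: the target is given a subsystem capped at two namespaces, the aer host tool is started and signals readiness by touching /tmp/aer_touch_file (the harness polls for that file before continuing), and attaching a second Malloc bdev fires the namespace-attribute-changed AEN the tool waits for. An equivalent manual sequence, assuming the target's default /var/tmp/spdk.sock RPC socket and that rpc_cmd maps to scripts/rpc.py (a sketch of the flow shown above, with the same sizes and NQN as this run):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # host side: registers AER callbacks, then touches /tmp/aer_touch_file once ready
  test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -n 2 -t /tmp/aer_touch_file &
  # adding a second namespace is what triggers the AEN the tool is waiting for
  scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2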
00:25:16.381 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=b4b878c763644c3aaea473bba1dfdb12 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:16.381 03:06:06 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:25:16.381 03:06:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:18.282 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:18.282 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:25:18.282 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:18.282 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:18.282 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:18.282 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:18.282 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:18.282 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:25:18.282 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:18.282 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:25:18.282 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:25:18.282 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:25:18.282 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:25:18.282 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:25:18.282 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:25:18.282 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:18.282 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:18.282 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:18.282 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:18.282 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:18.282 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:18.282 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:18.282 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:18.282 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:18.282 03:06:08 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:18.282 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:18.282 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:18.282 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:18.282 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:18.282 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:18.282 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:18.282 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:18.282 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:18.282 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:18.282 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:18.282 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:18.282 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:18.282 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:18.282 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:18.282 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:18.282 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:18.282 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:18.283 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:18.283 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
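The gather_supported_nvmf_pci_devs block above matches the host's NICs against a table of Intel/Mellanox device IDs and then resolves each hit to its kernel net device by globbing /sys/bus/pci/devices/<address>/net/. A stand-alone sketch of that last lookup for the E810 (8086:159b) parts found in this run (the lspci-driven loop is an assumption for illustration, not the script's own code):

    #!/usr/bin/env bash
    # Resolve every Intel E810 (8086:159b) PCI function to the net device(s) behind it,
    # using the same sysfs layout the trace relies on.
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        for net in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$net" ] || continue
            echo "Found net devices under $pci: $(basename "$net")"
        done
    done
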
00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:18.283 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:18.283 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:18.283 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:25:18.283 00:25:18.283 --- 10.0.0.2 ping statistics --- 00:25:18.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:18.283 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:18.283 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:18.283 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:25:18.283 00:25:18.283 --- 10.0.0.1 ping statistics --- 00:25:18.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:18.283 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=422770 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 422770 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@827 -- # '[' -z 422770 ']' 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:18.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:18.283 03:06:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:18.283 [2024-05-13 03:06:08.809335] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 
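Everything nvmf_tcp_init does above amounts to isolating the target-side port in its own network namespace, giving each side one address on 10.0.0.0/24, opening TCP port 4420 on the initiator interface, and ping-testing both directions. A rough by-hand equivalent (interface and namespace names are taken from the trace; the sequence is a sketch of the helper, not its exact code):

    #!/usr/bin/env bash
    set -e
    TGT_IF=cvl_0_0; INIT_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"                 # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev "$INIT_IF"            # initiator side stays in the default namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INIT_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1            # target -> initiator

The nvmf_tgt process itself is then launched through ip netns exec cvl_0_0_ns_spdk, so it serves 10.0.0.2:4420 inside the namespace while the initiator connects from the default namespace.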
00:25:18.283 [2024-05-13 03:06:08.809420] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:18.283 EAL: No free 2048 kB hugepages reported on node 1 00:25:18.283 [2024-05-13 03:06:08.852558] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:18.283 [2024-05-13 03:06:08.883199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.283 [2024-05-13 03:06:08.976192] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:18.283 [2024-05-13 03:06:08.976238] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:18.283 [2024-05-13 03:06:08.976255] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:18.283 [2024-05-13 03:06:08.976267] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:18.283 [2024-05-13 03:06:08.976279] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:18.283 [2024-05-13 03:06:08.976313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:18.284 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:18.284 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:25:18.284 03:06:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:18.284 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:18.284 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:18.541 03:06:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:18.541 03:06:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:18.541 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.541 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:18.541 [2024-05-13 03:06:09.106620] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:18.541 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.541 03:06:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:25:18.541 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.541 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:18.541 null0 00:25:18.541 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.541 03:06:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:25:18.541 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.541 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:18.541 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.541 03:06:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:25:18.541 03:06:09 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.541 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:18.541 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.541 03:06:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g b4b878c763644c3aaea473bba1dfdb12 00:25:18.541 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.541 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:18.541 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.541 03:06:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:18.541 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.541 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:18.541 [2024-05-13 03:06:09.146659] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:18.541 [2024-05-13 03:06:09.146912] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:18.541 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.541 03:06:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:25:18.541 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.541 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:18.799 nvme0n1 00:25:18.799 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.799 03:06:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:18.800 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.800 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:18.800 [ 00:25:18.800 { 00:25:18.800 "name": "nvme0n1", 00:25:18.800 "aliases": [ 00:25:18.800 "b4b878c7-6364-4c3a-aea4-73bba1dfdb12" 00:25:18.800 ], 00:25:18.800 "product_name": "NVMe disk", 00:25:18.800 "block_size": 512, 00:25:18.800 "num_blocks": 2097152, 00:25:18.800 "uuid": "b4b878c7-6364-4c3a-aea4-73bba1dfdb12", 00:25:18.800 "assigned_rate_limits": { 00:25:18.800 "rw_ios_per_sec": 0, 00:25:18.800 "rw_mbytes_per_sec": 0, 00:25:18.800 "r_mbytes_per_sec": 0, 00:25:18.800 "w_mbytes_per_sec": 0 00:25:18.800 }, 00:25:18.800 "claimed": false, 00:25:18.800 "zoned": false, 00:25:18.800 "supported_io_types": { 00:25:18.800 "read": true, 00:25:18.800 "write": true, 00:25:18.800 "unmap": false, 00:25:18.800 "write_zeroes": true, 00:25:18.800 "flush": true, 00:25:18.800 "reset": true, 00:25:18.800 "compare": true, 00:25:18.800 "compare_and_write": true, 00:25:18.800 "abort": true, 00:25:18.800 "nvme_admin": true, 00:25:18.800 "nvme_io": true 00:25:18.800 }, 00:25:18.800 "memory_domains": [ 00:25:18.800 { 00:25:18.800 "dma_device_id": "system", 00:25:18.800 "dma_device_type": 1 00:25:18.800 } 00:25:18.800 ], 00:25:18.800 "driver_specific": { 00:25:18.800 "nvme": [ 00:25:18.800 { 00:25:18.800 "trid": { 00:25:18.800 
"trtype": "TCP", 00:25:18.800 "adrfam": "IPv4", 00:25:18.800 "traddr": "10.0.0.2", 00:25:18.800 "trsvcid": "4420", 00:25:18.800 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:18.800 }, 00:25:18.800 "ctrlr_data": { 00:25:18.800 "cntlid": 1, 00:25:18.800 "vendor_id": "0x8086", 00:25:18.800 "model_number": "SPDK bdev Controller", 00:25:18.800 "serial_number": "00000000000000000000", 00:25:18.800 "firmware_revision": "24.05", 00:25:18.800 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:18.800 "oacs": { 00:25:18.800 "security": 0, 00:25:18.800 "format": 0, 00:25:18.800 "firmware": 0, 00:25:18.800 "ns_manage": 0 00:25:18.800 }, 00:25:18.800 "multi_ctrlr": true, 00:25:18.800 "ana_reporting": false 00:25:18.800 }, 00:25:18.800 "vs": { 00:25:18.800 "nvme_version": "1.3" 00:25:18.800 }, 00:25:18.800 "ns_data": { 00:25:18.800 "id": 1, 00:25:18.800 "can_share": true 00:25:18.800 } 00:25:18.800 } 00:25:18.800 ], 00:25:18.800 "mp_policy": "active_passive" 00:25:18.800 } 00:25:18.800 } 00:25:18.800 ] 00:25:18.800 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.800 03:06:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:25:18.800 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.800 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:18.800 [2024-05-13 03:06:09.395498] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:18.800 [2024-05-13 03:06:09.395605] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8c7c10 (9): Bad file descriptor 00:25:18.800 [2024-05-13 03:06:09.527852] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:18.800 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.800 03:06:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:18.800 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.800 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:18.800 [ 00:25:18.800 { 00:25:18.800 "name": "nvme0n1", 00:25:18.800 "aliases": [ 00:25:18.800 "b4b878c7-6364-4c3a-aea4-73bba1dfdb12" 00:25:18.800 ], 00:25:18.800 "product_name": "NVMe disk", 00:25:18.800 "block_size": 512, 00:25:18.800 "num_blocks": 2097152, 00:25:18.800 "uuid": "b4b878c7-6364-4c3a-aea4-73bba1dfdb12", 00:25:18.800 "assigned_rate_limits": { 00:25:18.800 "rw_ios_per_sec": 0, 00:25:18.800 "rw_mbytes_per_sec": 0, 00:25:18.800 "r_mbytes_per_sec": 0, 00:25:18.800 "w_mbytes_per_sec": 0 00:25:18.800 }, 00:25:18.800 "claimed": false, 00:25:18.800 "zoned": false, 00:25:18.800 "supported_io_types": { 00:25:18.800 "read": true, 00:25:18.800 "write": true, 00:25:18.800 "unmap": false, 00:25:18.800 "write_zeroes": true, 00:25:18.800 "flush": true, 00:25:18.800 "reset": true, 00:25:18.800 "compare": true, 00:25:18.800 "compare_and_write": true, 00:25:18.800 "abort": true, 00:25:18.800 "nvme_admin": true, 00:25:18.800 "nvme_io": true 00:25:18.800 }, 00:25:18.800 "memory_domains": [ 00:25:18.800 { 00:25:18.800 "dma_device_id": "system", 00:25:18.800 "dma_device_type": 1 00:25:18.800 } 00:25:18.800 ], 00:25:18.800 "driver_specific": { 00:25:18.800 "nvme": [ 00:25:18.800 { 00:25:18.800 "trid": { 00:25:18.800 "trtype": "TCP", 00:25:18.800 "adrfam": "IPv4", 00:25:18.800 "traddr": "10.0.0.2", 00:25:18.800 "trsvcid": "4420", 00:25:18.800 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:18.800 }, 00:25:18.800 "ctrlr_data": { 00:25:18.800 "cntlid": 2, 00:25:18.800 "vendor_id": "0x8086", 00:25:18.800 "model_number": "SPDK bdev Controller", 00:25:18.800 "serial_number": "00000000000000000000", 00:25:18.800 "firmware_revision": "24.05", 00:25:18.800 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:18.800 "oacs": { 00:25:18.800 "security": 0, 00:25:18.800 "format": 0, 00:25:18.800 "firmware": 0, 00:25:18.800 "ns_manage": 0 00:25:18.800 }, 00:25:18.800 "multi_ctrlr": true, 00:25:18.800 "ana_reporting": false 00:25:18.800 }, 00:25:18.800 "vs": { 00:25:18.800 "nvme_version": "1.3" 00:25:18.800 }, 00:25:18.800 "ns_data": { 00:25:18.800 "id": 1, 00:25:18.800 "can_share": true 00:25:18.800 } 00:25:18.800 } 00:25:18.800 ], 00:25:18.800 "mp_policy": "active_passive" 00:25:18.800 } 00:25:18.800 } 00:25:18.800 ] 00:25:18.800 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.800 03:06:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.800 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.800 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:18.800 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.800 03:06:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:25:18.800 03:06:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.atmuwWCon5 00:25:18.800 03:06:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:18.800 03:06:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # 
chmod 0600 /tmp/tmp.atmuwWCon5 00:25:18.800 03:06:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:25:18.800 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.800 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:18.800 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.800 03:06:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:25:18.800 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.800 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:18.800 [2024-05-13 03:06:09.580108] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:18.800 [2024-05-13 03:06:09.580276] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:18.800 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.800 03:06:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.atmuwWCon5 00:25:18.800 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.800 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:18.800 [2024-05-13 03:06:09.588100] tcp.c:3657:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:18.800 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.800 03:06:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.atmuwWCon5 00:25:18.800 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.800 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:18.800 [2024-05-13 03:06:09.596110] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:18.800 [2024-05-13 03:06:09.596183] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:19.058 nvme0n1 00:25:19.058 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.058 03:06:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:19.058 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.058 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:19.058 [ 00:25:19.058 { 00:25:19.058 "name": "nvme0n1", 00:25:19.058 "aliases": [ 00:25:19.058 "b4b878c7-6364-4c3a-aea4-73bba1dfdb12" 00:25:19.058 ], 00:25:19.058 "product_name": "NVMe disk", 00:25:19.058 "block_size": 512, 00:25:19.058 "num_blocks": 2097152, 00:25:19.058 "uuid": "b4b878c7-6364-4c3a-aea4-73bba1dfdb12", 00:25:19.058 "assigned_rate_limits": { 00:25:19.058 "rw_ios_per_sec": 0, 00:25:19.058 "rw_mbytes_per_sec": 0, 00:25:19.058 "r_mbytes_per_sec": 0, 00:25:19.058 "w_mbytes_per_sec": 0 00:25:19.058 }, 
00:25:19.058 "claimed": false, 00:25:19.058 "zoned": false, 00:25:19.058 "supported_io_types": { 00:25:19.058 "read": true, 00:25:19.058 "write": true, 00:25:19.058 "unmap": false, 00:25:19.058 "write_zeroes": true, 00:25:19.058 "flush": true, 00:25:19.058 "reset": true, 00:25:19.058 "compare": true, 00:25:19.058 "compare_and_write": true, 00:25:19.058 "abort": true, 00:25:19.058 "nvme_admin": true, 00:25:19.058 "nvme_io": true 00:25:19.058 }, 00:25:19.058 "memory_domains": [ 00:25:19.058 { 00:25:19.058 "dma_device_id": "system", 00:25:19.058 "dma_device_type": 1 00:25:19.058 } 00:25:19.058 ], 00:25:19.058 "driver_specific": { 00:25:19.058 "nvme": [ 00:25:19.058 { 00:25:19.058 "trid": { 00:25:19.058 "trtype": "TCP", 00:25:19.058 "adrfam": "IPv4", 00:25:19.058 "traddr": "10.0.0.2", 00:25:19.058 "trsvcid": "4421", 00:25:19.058 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:19.058 }, 00:25:19.058 "ctrlr_data": { 00:25:19.058 "cntlid": 3, 00:25:19.058 "vendor_id": "0x8086", 00:25:19.058 "model_number": "SPDK bdev Controller", 00:25:19.058 "serial_number": "00000000000000000000", 00:25:19.058 "firmware_revision": "24.05", 00:25:19.058 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:19.058 "oacs": { 00:25:19.058 "security": 0, 00:25:19.058 "format": 0, 00:25:19.058 "firmware": 0, 00:25:19.058 "ns_manage": 0 00:25:19.058 }, 00:25:19.058 "multi_ctrlr": true, 00:25:19.058 "ana_reporting": false 00:25:19.058 }, 00:25:19.058 "vs": { 00:25:19.058 "nvme_version": "1.3" 00:25:19.058 }, 00:25:19.058 "ns_data": { 00:25:19.058 "id": 1, 00:25:19.058 "can_share": true 00:25:19.058 } 00:25:19.058 } 00:25:19.058 ], 00:25:19.058 "mp_policy": "active_passive" 00:25:19.058 } 00:25:19.058 } 00:25:19.058 ] 00:25:19.058 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.058 03:06:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.058 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.058 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:19.058 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.058 03:06:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.atmuwWCon5 00:25:19.058 03:06:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:25:19.058 03:06:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:25:19.058 03:06:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:19.058 03:06:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:25:19.058 03:06:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:19.058 03:06:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:25:19.058 03:06:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:19.058 03:06:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:19.058 rmmod nvme_tcp 00:25:19.058 rmmod nvme_fabrics 00:25:19.058 rmmod nvme_keyring 00:25:19.058 03:06:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:19.058 03:06:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:25:19.058 03:06:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:25:19.058 03:06:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 422770 ']' 00:25:19.058 03:06:09 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@490 -- # killprocess 422770 00:25:19.058 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 422770 ']' 00:25:19.058 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 422770 00:25:19.058 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:25:19.058 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:19.058 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 422770 00:25:19.058 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:19.058 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:19.058 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 422770' 00:25:19.058 killing process with pid 422770 00:25:19.058 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 422770 00:25:19.058 [2024-05-13 03:06:09.771376] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:19.058 [2024-05-13 03:06:09.771407] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:19.058 [2024-05-13 03:06:09.771436] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:19.058 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 422770 00:25:19.316 03:06:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:19.316 03:06:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:19.316 03:06:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:19.316 03:06:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:19.316 03:06:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:19.316 03:06:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:19.316 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:19.316 03:06:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:21.848 03:06:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:21.848 00:25:21.848 real 0m5.388s 00:25:21.848 user 0m2.010s 00:25:21.848 sys 0m1.755s 00:25:21.848 03:06:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:21.848 03:06:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:21.848 ************************************ 00:25:21.848 END TEST nvmf_async_init 00:25:21.848 ************************************ 00:25:21.848 03:06:12 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:21.848 03:06:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:21.848 03:06:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:21.848 03:06:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:21.848 ************************************ 00:25:21.848 START TEST dma 
00:25:21.848 ************************************ 00:25:21.848 03:06:12 nvmf_tcp.dma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:21.848 * Looking for test storage... 00:25:21.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:21.848 03:06:12 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:21.848 03:06:12 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:25:21.848 03:06:12 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:21.848 03:06:12 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:21.848 03:06:12 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:21.848 03:06:12 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:21.848 03:06:12 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:21.848 03:06:12 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:21.848 03:06:12 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:21.848 03:06:12 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:21.848 03:06:12 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:21.848 03:06:12 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:21.848 03:06:12 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:21.848 03:06:12 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:21.848 03:06:12 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:21.848 03:06:12 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:21.848 03:06:12 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:21.848 03:06:12 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:21.848 03:06:12 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:21.848 03:06:12 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:21.848 03:06:12 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:21.848 03:06:12 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:21.848 03:06:12 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.848 03:06:12 nvmf_tcp.dma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.848 03:06:12 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.848 03:06:12 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:25:21.848 03:06:12 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.848 03:06:12 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:25:21.848 03:06:12 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:21.848 03:06:12 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:21.848 03:06:12 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:21.848 03:06:12 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:21.848 03:06:12 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:21.848 03:06:12 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:21.848 03:06:12 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:21.848 03:06:12 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:21.848 03:06:12 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:25:21.848 03:06:12 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:25:21.848 00:25:21.848 real 0m0.070s 00:25:21.848 user 0m0.030s 00:25:21.848 sys 0m0.045s 00:25:21.848 03:06:12 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:21.848 03:06:12 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:25:21.848 ************************************ 00:25:21.848 END TEST dma 00:25:21.848 ************************************ 00:25:21.848 03:06:12 nvmf_tcp -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:21.848 03:06:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:21.848 03:06:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:21.848 03:06:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:21.848 ************************************ 00:25:21.848 START TEST 
nvmf_identify 00:25:21.848 ************************************ 00:25:21.848 03:06:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:21.848 * Looking for test storage... 00:25:21.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:21.848 03:06:12 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:21.848 03:06:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:25:21.848 03:06:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:21.848 03:06:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:21.848 03:06:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:21.848 03:06:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:21.848 03:06:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:21.848 03:06:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:21.848 03:06:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:21.848 03:06:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:21.848 03:06:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:21.848 03:06:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:21.848 03:06:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:21.848 03:06:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:21.848 03:06:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:21.848 03:06:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:21.848 03:06:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:21.848 03:06:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:21.848 03:06:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:21.848 03:06:12 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:21.848 03:06:12 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:21.848 03:06:12 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:21.848 03:06:12 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.849 03:06:12 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.849 03:06:12 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.849 03:06:12 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:25:21.849 03:06:12 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.849 03:06:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:25:21.849 03:06:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:21.849 03:06:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:21.849 03:06:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:21.849 03:06:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:21.849 03:06:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:21.849 03:06:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:21.849 03:06:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:21.849 03:06:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:21.849 03:06:12 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:21.849 03:06:12 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:21.849 03:06:12 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:25:21.849 03:06:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:21.849 03:06:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:21.849 03:06:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:21.849 03:06:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:21.849 03:06:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:21.849 03:06:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:21.849 03:06:12 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:21.849 03:06:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:21.849 03:06:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:21.849 03:06:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:21.849 03:06:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:25:21.849 03:06:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:23.751 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:23.751 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:23.751 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:23.751 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:23.751 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:24.048 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:24.048 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:25:24.048 00:25:24.048 --- 10.0.0.2 ping statistics --- 00:25:24.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:24.048 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:25:24.048 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:24.048 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:24.048 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:25:24.048 00:25:24.048 --- 10.0.0.1 ping statistics --- 00:25:24.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:24.048 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:25:24.048 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:24.048 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:25:24.048 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:24.048 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:24.048 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:24.048 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:24.048 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:24.048 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:24.048 03:06:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:24.048 03:06:14 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:24.048 03:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:24.048 03:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:24.048 03:06:14 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=424943 00:25:24.048 03:06:14 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:24.048 03:06:14 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:24.048 03:06:14 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 424943 00:25:24.048 03:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 424943 ']' 00:25:24.048 03:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:24.048 03:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:24.048 03:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:24.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:24.048 03:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:24.048 03:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:24.048 [2024-05-13 03:06:14.625729] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:25:24.048 [2024-05-13 03:06:14.625807] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:24.048 EAL: No free 2048 kB hugepages reported on node 1 00:25:24.048 [2024-05-13 03:06:14.666513] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
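For readers reproducing this outside the harness: the nvmf_tcp_init steps traced above amount to roughly the following shell sequence. This is a sketch reconstructed from the trace; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.x addresses are simply what this run used, and the ./build/bin path is assumed relative to the SPDK tree (the log uses the absolute workspace path).
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (host side)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                                  # host -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> host sanity check
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # start the target in the namespace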
00:25:24.048 [2024-05-13 03:06:14.693911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:24.048 [2024-05-13 03:06:14.780148] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:24.048 [2024-05-13 03:06:14.780198] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:24.048 [2024-05-13 03:06:14.780236] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:24.048 [2024-05-13 03:06:14.780249] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:24.048 [2024-05-13 03:06:14.780259] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:24.048 [2024-05-13 03:06:14.780352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:24.048 [2024-05-13 03:06:14.780375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:24.048 [2024-05-13 03:06:14.780442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:24.048 [2024-05-13 03:06:14.780444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:24.308 03:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:24.308 03:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:25:24.308 03:06:14 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:24.308 03:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.308 03:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:24.308 [2024-05-13 03:06:14.904328] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:24.308 03:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.308 03:06:14 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:25:24.308 03:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:24.308 03:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:24.308 03:06:14 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:24.308 03:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.308 03:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:24.308 Malloc0 00:25:24.308 03:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.308 03:06:14 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:24.308 03:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.308 03:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:24.308 03:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.308 03:06:14 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:25:24.308 03:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.308 03:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:24.308 03:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.308 
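The app_setup_trace notices above point at the target's runtime trace buffer; if that snapshot is wanted while the target is still up, the commands the target itself suggests are roughly the following (binary path assumed relative to the SPDK build tree):
  ./build/bin/spdk_trace -s nvmf -i 0     # capture a snapshot of tracepoint events at runtime
  cp /dev/shm/nvmf_trace.0 .              # or keep the raw shm file for offline analysis/debug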
03:06:14 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:24.308 03:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.308 03:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:24.308 [2024-05-13 03:06:14.974921] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:24.308 [2024-05-13 03:06:14.975223] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:24.308 03:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.308 03:06:14 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:24.308 03:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.308 03:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:24.308 03:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.308 03:06:14 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:25:24.308 03:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.308 03:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:24.308 [ 00:25:24.308 { 00:25:24.308 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:24.308 "subtype": "Discovery", 00:25:24.308 "listen_addresses": [ 00:25:24.308 { 00:25:24.308 "trtype": "TCP", 00:25:24.308 "adrfam": "IPv4", 00:25:24.308 "traddr": "10.0.0.2", 00:25:24.308 "trsvcid": "4420" 00:25:24.308 } 00:25:24.308 ], 00:25:24.308 "allow_any_host": true, 00:25:24.308 "hosts": [] 00:25:24.308 }, 00:25:24.308 { 00:25:24.308 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:24.308 "subtype": "NVMe", 00:25:24.308 "listen_addresses": [ 00:25:24.308 { 00:25:24.308 "trtype": "TCP", 00:25:24.308 "adrfam": "IPv4", 00:25:24.308 "traddr": "10.0.0.2", 00:25:24.308 "trsvcid": "4420" 00:25:24.308 } 00:25:24.308 ], 00:25:24.308 "allow_any_host": true, 00:25:24.308 "hosts": [], 00:25:24.308 "serial_number": "SPDK00000000000001", 00:25:24.308 "model_number": "SPDK bdev Controller", 00:25:24.308 "max_namespaces": 32, 00:25:24.308 "min_cntlid": 1, 00:25:24.308 "max_cntlid": 65519, 00:25:24.308 "namespaces": [ 00:25:24.308 { 00:25:24.308 "nsid": 1, 00:25:24.308 "bdev_name": "Malloc0", 00:25:24.308 "name": "Malloc0", 00:25:24.308 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:25:24.308 "eui64": "ABCDEF0123456789", 00:25:24.308 "uuid": "fd8a4fe4-725d-40e2-985e-600291093bc4" 00:25:24.308 } 00:25:24.308 ] 00:25:24.308 } 00:25:24.308 ] 00:25:24.308 03:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.308 03:06:14 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:25:24.308 [2024-05-13 03:06:15.012272] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 
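Taken together, the rpc_cmd calls traced from host/identify.sh above configure the target end to end; a sketch of the equivalent standalone invocations follows. Using ./scripts/rpc.py directly is an assumption here (the harness routes these through its rpc_cmd wrapper), but the commands and arguments are the ones shown in the trace.
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_get_subsystems      # prints the subsystem JSON shown above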
00:25:24.308 [2024-05-13 03:06:15.012311] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid424967 ] 00:25:24.308 EAL: No free 2048 kB hugepages reported on node 1 00:25:24.308 [2024-05-13 03:06:15.029363] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:24.308 [2024-05-13 03:06:15.047050] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:25:24.309 [2024-05-13 03:06:15.047107] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:24.309 [2024-05-13 03:06:15.047117] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:24.309 [2024-05-13 03:06:15.047134] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:24.309 [2024-05-13 03:06:15.047147] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:24.309 [2024-05-13 03:06:15.047481] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:25:24.309 [2024-05-13 03:06:15.047536] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x198d450 0 00:25:24.309 [2024-05-13 03:06:15.061725] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:24.309 [2024-05-13 03:06:15.061747] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:24.309 [2024-05-13 03:06:15.061755] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:24.309 [2024-05-13 03:06:15.061762] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:24.309 [2024-05-13 03:06:15.061828] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.309 [2024-05-13 03:06:15.061842] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.309 [2024-05-13 03:06:15.061850] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x198d450) 00:25:24.309 [2024-05-13 03:06:15.061869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:24.309 [2024-05-13 03:06:15.061896] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19f4800, cid 0, qid 0 00:25:24.309 [2024-05-13 03:06:15.069720] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.309 [2024-05-13 03:06:15.069738] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.309 [2024-05-13 03:06:15.069746] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.309 [2024-05-13 03:06:15.069753] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19f4800) on tqpair=0x198d450 00:25:24.309 [2024-05-13 03:06:15.069783] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:24.309 [2024-05-13 03:06:15.069794] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:25:24.309 [2024-05-13 03:06:15.069804] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 
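The dump that follows is produced by the identify example application. The two invocations this test uses (against the discovery subsystem here, and against cnode1 further below) are, with the binary path shortened to be relative to the SPDK tree:
  ./build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
  ./build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
A kernel-initiator cross-check of the same discovery information would be 'nvme discover -t tcp -a 10.0.0.2 -s 4420', though that is not part of this run.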
00:25:24.309 [2024-05-13 03:06:15.069824] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.309 [2024-05-13 03:06:15.069832] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.309 [2024-05-13 03:06:15.069839] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x198d450) 00:25:24.309 [2024-05-13 03:06:15.069850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.309 [2024-05-13 03:06:15.069874] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19f4800, cid 0, qid 0 00:25:24.309 [2024-05-13 03:06:15.070073] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.309 [2024-05-13 03:06:15.070089] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.309 [2024-05-13 03:06:15.070097] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.309 [2024-05-13 03:06:15.070104] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19f4800) on tqpair=0x198d450 00:25:24.309 [2024-05-13 03:06:15.070115] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:25:24.309 [2024-05-13 03:06:15.070128] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:25:24.309 [2024-05-13 03:06:15.070140] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.309 [2024-05-13 03:06:15.070148] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.309 [2024-05-13 03:06:15.070154] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x198d450) 00:25:24.309 [2024-05-13 03:06:15.070165] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.309 [2024-05-13 03:06:15.070187] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19f4800, cid 0, qid 0 00:25:24.309 [2024-05-13 03:06:15.070372] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.309 [2024-05-13 03:06:15.070384] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.309 [2024-05-13 03:06:15.070391] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.309 [2024-05-13 03:06:15.070397] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19f4800) on tqpair=0x198d450 00:25:24.309 [2024-05-13 03:06:15.070408] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:25:24.309 [2024-05-13 03:06:15.070422] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:25:24.309 [2024-05-13 03:06:15.070434] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.309 [2024-05-13 03:06:15.070441] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.309 [2024-05-13 03:06:15.070448] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x198d450) 00:25:24.309 [2024-05-13 03:06:15.070459] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.309 [2024-05-13 03:06:15.070479] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19f4800, cid 0, qid 0 00:25:24.309 [2024-05-13 03:06:15.070664] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.309 [2024-05-13 03:06:15.070690] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.309 [2024-05-13 03:06:15.070706] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.309 [2024-05-13 03:06:15.070713] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19f4800) on tqpair=0x198d450 00:25:24.309 [2024-05-13 03:06:15.070725] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:24.309 [2024-05-13 03:06:15.070742] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.309 [2024-05-13 03:06:15.070752] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.309 [2024-05-13 03:06:15.070758] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x198d450) 00:25:24.309 [2024-05-13 03:06:15.070769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.309 [2024-05-13 03:06:15.070790] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19f4800, cid 0, qid 0 00:25:24.309 [2024-05-13 03:06:15.070973] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.309 [2024-05-13 03:06:15.070988] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.309 [2024-05-13 03:06:15.070995] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.309 [2024-05-13 03:06:15.071006] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19f4800) on tqpair=0x198d450 00:25:24.309 [2024-05-13 03:06:15.071017] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:25:24.309 [2024-05-13 03:06:15.071026] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:25:24.309 [2024-05-13 03:06:15.071039] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:24.309 [2024-05-13 03:06:15.071160] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:25:24.309 [2024-05-13 03:06:15.071169] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:24.309 [2024-05-13 03:06:15.071184] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.309 [2024-05-13 03:06:15.071192] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.309 [2024-05-13 03:06:15.071198] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x198d450) 00:25:24.309 [2024-05-13 03:06:15.071208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.309 [2024-05-13 03:06:15.071229] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19f4800, cid 0, qid 0 00:25:24.309 [2024-05-13 03:06:15.071427] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 
5 00:25:24.309 [2024-05-13 03:06:15.071440] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.309 [2024-05-13 03:06:15.071446] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.309 [2024-05-13 03:06:15.071453] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19f4800) on tqpair=0x198d450 00:25:24.309 [2024-05-13 03:06:15.071463] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:24.309 [2024-05-13 03:06:15.071479] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.309 [2024-05-13 03:06:15.071488] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.309 [2024-05-13 03:06:15.071494] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x198d450) 00:25:24.309 [2024-05-13 03:06:15.071505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.309 [2024-05-13 03:06:15.071526] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19f4800, cid 0, qid 0 00:25:24.309 [2024-05-13 03:06:15.071722] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.309 [2024-05-13 03:06:15.071737] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.309 [2024-05-13 03:06:15.071744] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.309 [2024-05-13 03:06:15.071751] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19f4800) on tqpair=0x198d450 00:25:24.309 [2024-05-13 03:06:15.071761] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:24.309 [2024-05-13 03:06:15.071769] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:25:24.309 [2024-05-13 03:06:15.071783] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:25:24.309 [2024-05-13 03:06:15.071797] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:25:24.309 [2024-05-13 03:06:15.071813] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.309 [2024-05-13 03:06:15.071821] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x198d450) 00:25:24.309 [2024-05-13 03:06:15.071835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.309 [2024-05-13 03:06:15.071857] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19f4800, cid 0, qid 0 00:25:24.309 [2024-05-13 03:06:15.072085] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:24.309 [2024-05-13 03:06:15.072101] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:24.309 [2024-05-13 03:06:15.072108] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:24.309 [2024-05-13 03:06:15.072115] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x198d450): datao=0, datal=4096, cccid=0 00:25:24.309 [2024-05-13 03:06:15.072123] 
nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19f4800) on tqpair(0x198d450): expected_datao=0, payload_size=4096 00:25:24.309 [2024-05-13 03:06:15.072131] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.310 [2024-05-13 03:06:15.072191] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:24.310 [2024-05-13 03:06:15.072202] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:24.571 [2024-05-13 03:06:15.113711] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.571 [2024-05-13 03:06:15.113730] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.571 [2024-05-13 03:06:15.113738] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.571 [2024-05-13 03:06:15.113745] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19f4800) on tqpair=0x198d450 00:25:24.571 [2024-05-13 03:06:15.113759] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:25:24.571 [2024-05-13 03:06:15.113769] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:25:24.571 [2024-05-13 03:06:15.113777] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:25:24.571 [2024-05-13 03:06:15.113786] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:25:24.571 [2024-05-13 03:06:15.113793] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:25:24.571 [2024-05-13 03:06:15.113802] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:25:24.571 [2024-05-13 03:06:15.113817] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:25:24.571 [2024-05-13 03:06:15.113835] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.571 [2024-05-13 03:06:15.113844] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.571 [2024-05-13 03:06:15.113851] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x198d450) 00:25:24.571 [2024-05-13 03:06:15.113862] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:24.571 [2024-05-13 03:06:15.113885] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19f4800, cid 0, qid 0 00:25:24.571 [2024-05-13 03:06:15.114085] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.571 [2024-05-13 03:06:15.114101] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.571 [2024-05-13 03:06:15.114108] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.571 [2024-05-13 03:06:15.114115] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19f4800) on tqpair=0x198d450 00:25:24.571 [2024-05-13 03:06:15.114129] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.571 [2024-05-13 03:06:15.114137] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.571 [2024-05-13 03:06:15.114143] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x198d450) 00:25:24.571 [2024-05-13 03:06:15.114153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.571 [2024-05-13 03:06:15.114176] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.571 [2024-05-13 03:06:15.114184] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.571 [2024-05-13 03:06:15.114190] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x198d450) 00:25:24.571 [2024-05-13 03:06:15.114199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.571 [2024-05-13 03:06:15.114209] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.571 [2024-05-13 03:06:15.114216] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.571 [2024-05-13 03:06:15.114222] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x198d450) 00:25:24.572 [2024-05-13 03:06:15.114231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.572 [2024-05-13 03:06:15.114240] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.572 [2024-05-13 03:06:15.114247] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.572 [2024-05-13 03:06:15.114253] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x198d450) 00:25:24.572 [2024-05-13 03:06:15.114262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.572 [2024-05-13 03:06:15.114271] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:25:24.572 [2024-05-13 03:06:15.114291] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:24.572 [2024-05-13 03:06:15.114303] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.572 [2024-05-13 03:06:15.114311] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x198d450) 00:25:24.572 [2024-05-13 03:06:15.114321] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.572 [2024-05-13 03:06:15.114354] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19f4800, cid 0, qid 0 00:25:24.572 [2024-05-13 03:06:15.114365] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19f4960, cid 1, qid 0 00:25:24.572 [2024-05-13 03:06:15.114373] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19f4ac0, cid 2, qid 0 00:25:24.572 [2024-05-13 03:06:15.114381] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19f4c20, cid 3, qid 0 00:25:24.572 [2024-05-13 03:06:15.114388] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19f4d80, cid 4, qid 0 00:25:24.572 [2024-05-13 03:06:15.114612] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.572 [2024-05-13 03:06:15.114627] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.572 [2024-05-13 03:06:15.114634] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.572 [2024-05-13 03:06:15.114641] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19f4d80) on tqpair=0x198d450 00:25:24.572 [2024-05-13 03:06:15.114652] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:25:24.572 [2024-05-13 03:06:15.114661] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:25:24.572 [2024-05-13 03:06:15.114679] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.572 [2024-05-13 03:06:15.114688] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x198d450) 00:25:24.572 [2024-05-13 03:06:15.114708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.572 [2024-05-13 03:06:15.114731] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19f4d80, cid 4, qid 0 00:25:24.572 [2024-05-13 03:06:15.114943] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:24.572 [2024-05-13 03:06:15.114956] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:24.572 [2024-05-13 03:06:15.114963] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:24.572 [2024-05-13 03:06:15.114969] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x198d450): datao=0, datal=4096, cccid=4 00:25:24.572 [2024-05-13 03:06:15.114977] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19f4d80) on tqpair(0x198d450): expected_datao=0, payload_size=4096 00:25:24.572 [2024-05-13 03:06:15.114985] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.572 [2024-05-13 03:06:15.115003] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:24.572 [2024-05-13 03:06:15.115011] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:24.572 [2024-05-13 03:06:15.115088] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.572 [2024-05-13 03:06:15.115099] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.572 [2024-05-13 03:06:15.115106] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.572 [2024-05-13 03:06:15.115113] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19f4d80) on tqpair=0x198d450 00:25:24.572 [2024-05-13 03:06:15.115132] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:25:24.572 [2024-05-13 03:06:15.115172] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.572 [2024-05-13 03:06:15.115182] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x198d450) 00:25:24.572 [2024-05-13 03:06:15.115193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.572 [2024-05-13 03:06:15.115205] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.572 [2024-05-13 03:06:15.115212] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.572 [2024-05-13 03:06:15.115218] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x198d450) 00:25:24.572 
[2024-05-13 03:06:15.115227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.572 [2024-05-13 03:06:15.115268] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19f4d80, cid 4, qid 0 00:25:24.572 [2024-05-13 03:06:15.115280] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19f4ee0, cid 5, qid 0 00:25:24.572 [2024-05-13 03:06:15.115541] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:24.572 [2024-05-13 03:06:15.115557] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:24.572 [2024-05-13 03:06:15.115564] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:24.572 [2024-05-13 03:06:15.115570] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x198d450): datao=0, datal=1024, cccid=4 00:25:24.572 [2024-05-13 03:06:15.115578] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19f4d80) on tqpair(0x198d450): expected_datao=0, payload_size=1024 00:25:24.572 [2024-05-13 03:06:15.115585] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.572 [2024-05-13 03:06:15.115595] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:24.572 [2024-05-13 03:06:15.115603] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:24.572 [2024-05-13 03:06:15.115612] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.572 [2024-05-13 03:06:15.115620] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.572 [2024-05-13 03:06:15.115627] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.572 [2024-05-13 03:06:15.115634] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19f4ee0) on tqpair=0x198d450 00:25:24.572 [2024-05-13 03:06:15.155869] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.572 [2024-05-13 03:06:15.155889] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.572 [2024-05-13 03:06:15.155897] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.572 [2024-05-13 03:06:15.155910] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19f4d80) on tqpair=0x198d450 00:25:24.572 [2024-05-13 03:06:15.155930] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.572 [2024-05-13 03:06:15.155940] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x198d450) 00:25:24.572 [2024-05-13 03:06:15.155951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.572 [2024-05-13 03:06:15.155981] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19f4d80, cid 4, qid 0 00:25:24.572 [2024-05-13 03:06:15.156194] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:24.572 [2024-05-13 03:06:15.156210] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:24.572 [2024-05-13 03:06:15.156217] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:24.572 [2024-05-13 03:06:15.156223] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x198d450): datao=0, datal=3072, cccid=4 00:25:24.572 [2024-05-13 03:06:15.156231] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19f4d80) on tqpair(0x198d450): expected_datao=0, 
payload_size=3072 00:25:24.572 [2024-05-13 03:06:15.156239] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.572 [2024-05-13 03:06:15.156310] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:24.572 [2024-05-13 03:06:15.156320] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:24.572 [2024-05-13 03:06:15.200727] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.572 [2024-05-13 03:06:15.200745] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.572 [2024-05-13 03:06:15.200752] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.572 [2024-05-13 03:06:15.200759] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19f4d80) on tqpair=0x198d450 00:25:24.572 [2024-05-13 03:06:15.200790] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.572 [2024-05-13 03:06:15.200799] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x198d450) 00:25:24.572 [2024-05-13 03:06:15.200810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.572 [2024-05-13 03:06:15.200840] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19f4d80, cid 4, qid 0 00:25:24.572 [2024-05-13 03:06:15.201037] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:24.572 [2024-05-13 03:06:15.201050] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:24.572 [2024-05-13 03:06:15.201056] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:24.572 [2024-05-13 03:06:15.201063] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x198d450): datao=0, datal=8, cccid=4 00:25:24.572 [2024-05-13 03:06:15.201071] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19f4d80) on tqpair(0x198d450): expected_datao=0, payload_size=8 00:25:24.572 [2024-05-13 03:06:15.201078] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.572 [2024-05-13 03:06:15.201088] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:24.572 [2024-05-13 03:06:15.201096] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:24.572 [2024-05-13 03:06:15.241881] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.572 [2024-05-13 03:06:15.241910] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.572 [2024-05-13 03:06:15.241917] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.572 [2024-05-13 03:06:15.241924] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19f4d80) on tqpair=0x198d450 00:25:24.572 ===================================================== 00:25:24.572 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:24.572 ===================================================== 00:25:24.572 Controller Capabilities/Features 00:25:24.572 ================================ 00:25:24.572 Vendor ID: 0000 00:25:24.572 Subsystem Vendor ID: 0000 00:25:24.572 Serial Number: .................... 00:25:24.572 Model Number: ........................................ 
00:25:24.572 Firmware Version: 24.05 00:25:24.572 Recommended Arb Burst: 0 00:25:24.572 IEEE OUI Identifier: 00 00 00 00:25:24.572 Multi-path I/O 00:25:24.572 May have multiple subsystem ports: No 00:25:24.572 May have multiple controllers: No 00:25:24.572 Associated with SR-IOV VF: No 00:25:24.572 Max Data Transfer Size: 131072 00:25:24.572 Max Number of Namespaces: 0 00:25:24.573 Max Number of I/O Queues: 1024 00:25:24.573 NVMe Specification Version (VS): 1.3 00:25:24.573 NVMe Specification Version (Identify): 1.3 00:25:24.573 Maximum Queue Entries: 128 00:25:24.573 Contiguous Queues Required: Yes 00:25:24.573 Arbitration Mechanisms Supported 00:25:24.573 Weighted Round Robin: Not Supported 00:25:24.573 Vendor Specific: Not Supported 00:25:24.573 Reset Timeout: 15000 ms 00:25:24.573 Doorbell Stride: 4 bytes 00:25:24.573 NVM Subsystem Reset: Not Supported 00:25:24.573 Command Sets Supported 00:25:24.573 NVM Command Set: Supported 00:25:24.573 Boot Partition: Not Supported 00:25:24.573 Memory Page Size Minimum: 4096 bytes 00:25:24.573 Memory Page Size Maximum: 4096 bytes 00:25:24.573 Persistent Memory Region: Not Supported 00:25:24.573 Optional Asynchronous Events Supported 00:25:24.573 Namespace Attribute Notices: Not Supported 00:25:24.573 Firmware Activation Notices: Not Supported 00:25:24.573 ANA Change Notices: Not Supported 00:25:24.573 PLE Aggregate Log Change Notices: Not Supported 00:25:24.573 LBA Status Info Alert Notices: Not Supported 00:25:24.573 EGE Aggregate Log Change Notices: Not Supported 00:25:24.573 Normal NVM Subsystem Shutdown event: Not Supported 00:25:24.573 Zone Descriptor Change Notices: Not Supported 00:25:24.573 Discovery Log Change Notices: Supported 00:25:24.573 Controller Attributes 00:25:24.573 128-bit Host Identifier: Not Supported 00:25:24.573 Non-Operational Permissive Mode: Not Supported 00:25:24.573 NVM Sets: Not Supported 00:25:24.573 Read Recovery Levels: Not Supported 00:25:24.573 Endurance Groups: Not Supported 00:25:24.573 Predictable Latency Mode: Not Supported 00:25:24.573 Traffic Based Keep ALive: Not Supported 00:25:24.573 Namespace Granularity: Not Supported 00:25:24.573 SQ Associations: Not Supported 00:25:24.573 UUID List: Not Supported 00:25:24.573 Multi-Domain Subsystem: Not Supported 00:25:24.573 Fixed Capacity Management: Not Supported 00:25:24.573 Variable Capacity Management: Not Supported 00:25:24.573 Delete Endurance Group: Not Supported 00:25:24.573 Delete NVM Set: Not Supported 00:25:24.573 Extended LBA Formats Supported: Not Supported 00:25:24.573 Flexible Data Placement Supported: Not Supported 00:25:24.573 00:25:24.573 Controller Memory Buffer Support 00:25:24.573 ================================ 00:25:24.573 Supported: No 00:25:24.573 00:25:24.573 Persistent Memory Region Support 00:25:24.573 ================================ 00:25:24.573 Supported: No 00:25:24.573 00:25:24.573 Admin Command Set Attributes 00:25:24.573 ============================ 00:25:24.573 Security Send/Receive: Not Supported 00:25:24.573 Format NVM: Not Supported 00:25:24.573 Firmware Activate/Download: Not Supported 00:25:24.573 Namespace Management: Not Supported 00:25:24.573 Device Self-Test: Not Supported 00:25:24.573 Directives: Not Supported 00:25:24.573 NVMe-MI: Not Supported 00:25:24.573 Virtualization Management: Not Supported 00:25:24.573 Doorbell Buffer Config: Not Supported 00:25:24.573 Get LBA Status Capability: Not Supported 00:25:24.573 Command & Feature Lockdown Capability: Not Supported 00:25:24.573 Abort Command Limit: 1 00:25:24.573 Async 
Event Request Limit: 4 00:25:24.573 Number of Firmware Slots: N/A 00:25:24.573 Firmware Slot 1 Read-Only: N/A 00:25:24.573 Firmware Activation Without Reset: N/A 00:25:24.573 Multiple Update Detection Support: N/A 00:25:24.573 Firmware Update Granularity: No Information Provided 00:25:24.573 Per-Namespace SMART Log: No 00:25:24.573 Asymmetric Namespace Access Log Page: Not Supported 00:25:24.573 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:24.573 Command Effects Log Page: Not Supported 00:25:24.573 Get Log Page Extended Data: Supported 00:25:24.573 Telemetry Log Pages: Not Supported 00:25:24.573 Persistent Event Log Pages: Not Supported 00:25:24.573 Supported Log Pages Log Page: May Support 00:25:24.573 Commands Supported & Effects Log Page: Not Supported 00:25:24.573 Feature Identifiers & Effects Log Page:May Support 00:25:24.573 NVMe-MI Commands & Effects Log Page: May Support 00:25:24.573 Data Area 4 for Telemetry Log: Not Supported 00:25:24.573 Error Log Page Entries Supported: 128 00:25:24.573 Keep Alive: Not Supported 00:25:24.573 00:25:24.573 NVM Command Set Attributes 00:25:24.573 ========================== 00:25:24.573 Submission Queue Entry Size 00:25:24.573 Max: 1 00:25:24.573 Min: 1 00:25:24.573 Completion Queue Entry Size 00:25:24.573 Max: 1 00:25:24.573 Min: 1 00:25:24.573 Number of Namespaces: 0 00:25:24.573 Compare Command: Not Supported 00:25:24.573 Write Uncorrectable Command: Not Supported 00:25:24.573 Dataset Management Command: Not Supported 00:25:24.573 Write Zeroes Command: Not Supported 00:25:24.573 Set Features Save Field: Not Supported 00:25:24.573 Reservations: Not Supported 00:25:24.573 Timestamp: Not Supported 00:25:24.573 Copy: Not Supported 00:25:24.573 Volatile Write Cache: Not Present 00:25:24.573 Atomic Write Unit (Normal): 1 00:25:24.573 Atomic Write Unit (PFail): 1 00:25:24.573 Atomic Compare & Write Unit: 1 00:25:24.573 Fused Compare & Write: Supported 00:25:24.573 Scatter-Gather List 00:25:24.573 SGL Command Set: Supported 00:25:24.573 SGL Keyed: Supported 00:25:24.573 SGL Bit Bucket Descriptor: Not Supported 00:25:24.573 SGL Metadata Pointer: Not Supported 00:25:24.573 Oversized SGL: Not Supported 00:25:24.573 SGL Metadata Address: Not Supported 00:25:24.573 SGL Offset: Supported 00:25:24.573 Transport SGL Data Block: Not Supported 00:25:24.573 Replay Protected Memory Block: Not Supported 00:25:24.573 00:25:24.573 Firmware Slot Information 00:25:24.573 ========================= 00:25:24.573 Active slot: 0 00:25:24.573 00:25:24.573 00:25:24.573 Error Log 00:25:24.573 ========= 00:25:24.573 00:25:24.573 Active Namespaces 00:25:24.573 ================= 00:25:24.573 Discovery Log Page 00:25:24.573 ================== 00:25:24.573 Generation Counter: 2 00:25:24.573 Number of Records: 2 00:25:24.573 Record Format: 0 00:25:24.573 00:25:24.573 Discovery Log Entry 0 00:25:24.573 ---------------------- 00:25:24.573 Transport Type: 3 (TCP) 00:25:24.573 Address Family: 1 (IPv4) 00:25:24.573 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:24.573 Entry Flags: 00:25:24.573 Duplicate Returned Information: 1 00:25:24.573 Explicit Persistent Connection Support for Discovery: 1 00:25:24.573 Transport Requirements: 00:25:24.573 Secure Channel: Not Required 00:25:24.573 Port ID: 0 (0x0000) 00:25:24.573 Controller ID: 65535 (0xffff) 00:25:24.573 Admin Max SQ Size: 128 00:25:24.573 Transport Service Identifier: 4420 00:25:24.573 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:24.573 Transport Address: 10.0.0.2 00:25:24.573 
Discovery Log Entry 1 00:25:24.573 ---------------------- 00:25:24.573 Transport Type: 3 (TCP) 00:25:24.573 Address Family: 1 (IPv4) 00:25:24.573 Subsystem Type: 2 (NVM Subsystem) 00:25:24.573 Entry Flags: 00:25:24.573 Duplicate Returned Information: 0 00:25:24.573 Explicit Persistent Connection Support for Discovery: 0 00:25:24.573 Transport Requirements: 00:25:24.573 Secure Channel: Not Required 00:25:24.573 Port ID: 0 (0x0000) 00:25:24.573 Controller ID: 65535 (0xffff) 00:25:24.573 Admin Max SQ Size: 128 00:25:24.573 Transport Service Identifier: 4420 00:25:24.573 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:25:24.573 Transport Address: 10.0.0.2 [2024-05-13 03:06:15.242046] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:25:24.573 [2024-05-13 03:06:15.242072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.573 [2024-05-13 03:06:15.242084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.573 [2024-05-13 03:06:15.242097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.573 [2024-05-13 03:06:15.242107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.573 [2024-05-13 03:06:15.242121] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.573 [2024-05-13 03:06:15.242129] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.573 [2024-05-13 03:06:15.242136] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x198d450) 00:25:24.573 [2024-05-13 03:06:15.242147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.573 [2024-05-13 03:06:15.242172] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19f4c20, cid 3, qid 0 00:25:24.573 [2024-05-13 03:06:15.242343] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.573 [2024-05-13 03:06:15.242355] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.573 [2024-05-13 03:06:15.242362] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.573 [2024-05-13 03:06:15.242369] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19f4c20) on tqpair=0x198d450 00:25:24.573 [2024-05-13 03:06:15.242382] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.573 [2024-05-13 03:06:15.242390] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.574 [2024-05-13 03:06:15.242397] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x198d450) 00:25:24.574 [2024-05-13 03:06:15.242407] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.574 [2024-05-13 03:06:15.242433] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19f4c20, cid 3, qid 0 00:25:24.574 [2024-05-13 03:06:15.242622] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.574 [2024-05-13 03:06:15.242635] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.574 [2024-05-13 03:06:15.242641] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.574 [2024-05-13 03:06:15.242648] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19f4c20) on tqpair=0x198d450 00:25:24.574 [2024-05-13 03:06:15.242658] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:25:24.574 [2024-05-13 03:06:15.242667] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:25:24.574 [2024-05-13 03:06:15.242683] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.574 [2024-05-13 03:06:15.242692] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.574 [2024-05-13 03:06:15.242707] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x198d450) 00:25:24.574 [2024-05-13 03:06:15.242718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.574 [2024-05-13 03:06:15.242740] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19f4c20, cid 3, qid 0 00:25:24.574 [2024-05-13 03:06:15.242941] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.574 [2024-05-13 03:06:15.242956] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.574 [2024-05-13 03:06:15.242963] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.574 [2024-05-13 03:06:15.242969] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19f4c20) on tqpair=0x198d450 00:25:24.574 [2024-05-13 03:06:15.242988] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.574 [2024-05-13 03:06:15.242998] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.574 [2024-05-13 03:06:15.243004] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x198d450) 00:25:24.574 [2024-05-13 03:06:15.243015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.574 [2024-05-13 03:06:15.243040] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19f4c20, cid 3, qid 0 00:25:24.574 [2024-05-13 03:06:15.243219] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.574 [2024-05-13 03:06:15.243231] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.574 [2024-05-13 03:06:15.243238] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.574 [2024-05-13 03:06:15.243244] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19f4c20) on tqpair=0x198d450 00:25:24.574 [2024-05-13 03:06:15.243261] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.574 [2024-05-13 03:06:15.243271] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.574 [2024-05-13 03:06:15.243277] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x198d450) 00:25:24.574 [2024-05-13 03:06:15.243288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.574 [2024-05-13 03:06:15.243308] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19f4c20, cid 3, qid 0 00:25:24.574 [2024-05-13 03:06:15.243485] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.574 [2024-05-13 
03:06:15.243500] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.574 [2024-05-13 03:06:15.243507] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.574 [2024-05-13 03:06:15.243514] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19f4c20) on tqpair=0x198d450 00:25:24.574 [2024-05-13 03:06:15.243532] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.574 [2024-05-13 03:06:15.243542] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.574 [2024-05-13 03:06:15.243548] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x198d450) 00:25:24.574 [2024-05-13 03:06:15.243559] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.574 [2024-05-13 03:06:15.243579] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19f4c20, cid 3, qid 0 00:25:24.574 [2024-05-13 03:06:15.247707] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.574 [2024-05-13 03:06:15.247724] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.574 [2024-05-13 03:06:15.247731] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.574 [2024-05-13 03:06:15.247738] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19f4c20) on tqpair=0x198d450 00:25:24.574 [2024-05-13 03:06:15.247757] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.574 [2024-05-13 03:06:15.247782] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.574 [2024-05-13 03:06:15.247788] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x198d450) 00:25:24.574 [2024-05-13 03:06:15.247799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.574 [2024-05-13 03:06:15.247822] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19f4c20, cid 3, qid 0 00:25:24.574 [2024-05-13 03:06:15.248018] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.574 [2024-05-13 03:06:15.248032] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.574 [2024-05-13 03:06:15.248039] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.574 [2024-05-13 03:06:15.248046] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19f4c20) on tqpair=0x198d450 00:25:24.574 [2024-05-13 03:06:15.248061] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:25:24.574 00:25:24.574 03:06:15 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:25:24.574 [2024-05-13 03:06:15.278625] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 
00:25:24.574 [2024-05-13 03:06:15.278666] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid425084 ] 00:25:24.574 EAL: No free 2048 kB hugepages reported on node 1 00:25:24.574 [2024-05-13 03:06:15.294960] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:24.574 [2024-05-13 03:06:15.312417] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:25:24.574 [2024-05-13 03:06:15.312463] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:24.574 [2024-05-13 03:06:15.312472] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:24.574 [2024-05-13 03:06:15.312488] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:24.574 [2024-05-13 03:06:15.312499] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:24.574 [2024-05-13 03:06:15.312768] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:25:24.574 [2024-05-13 03:06:15.312812] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x5c0450 0 00:25:24.574 [2024-05-13 03:06:15.319728] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:24.574 [2024-05-13 03:06:15.319746] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:24.574 [2024-05-13 03:06:15.319753] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:24.574 [2024-05-13 03:06:15.319759] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:24.574 [2024-05-13 03:06:15.319811] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.574 [2024-05-13 03:06:15.319824] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.574 [2024-05-13 03:06:15.319831] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5c0450) 00:25:24.574 [2024-05-13 03:06:15.319845] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:24.574 [2024-05-13 03:06:15.319871] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x627800, cid 0, qid 0 00:25:24.574 [2024-05-13 03:06:15.330714] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.574 [2024-05-13 03:06:15.330731] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.574 [2024-05-13 03:06:15.330738] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.574 [2024-05-13 03:06:15.330745] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x627800) on tqpair=0x5c0450 00:25:24.574 [2024-05-13 03:06:15.330777] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:24.574 [2024-05-13 03:06:15.330788] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:25:24.574 [2024-05-13 03:06:15.330798] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:25:24.574 [2024-05-13 03:06:15.330815] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.574 [2024-05-13 03:06:15.330823] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.574 [2024-05-13 03:06:15.330830] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5c0450) 00:25:24.574 [2024-05-13 03:06:15.330842] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.574 [2024-05-13 03:06:15.330865] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x627800, cid 0, qid 0 00:25:24.574 [2024-05-13 03:06:15.331105] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.574 [2024-05-13 03:06:15.331118] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.574 [2024-05-13 03:06:15.331125] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.574 [2024-05-13 03:06:15.331132] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x627800) on tqpair=0x5c0450 00:25:24.574 [2024-05-13 03:06:15.331140] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:25:24.574 [2024-05-13 03:06:15.331153] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:25:24.574 [2024-05-13 03:06:15.331180] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.574 [2024-05-13 03:06:15.331188] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.574 [2024-05-13 03:06:15.331195] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5c0450) 00:25:24.574 [2024-05-13 03:06:15.331205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.574 [2024-05-13 03:06:15.331226] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x627800, cid 0, qid 0 00:25:24.574 [2024-05-13 03:06:15.331471] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.574 [2024-05-13 03:06:15.331483] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.574 [2024-05-13 03:06:15.331490] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.574 [2024-05-13 03:06:15.331497] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x627800) on tqpair=0x5c0450 00:25:24.575 [2024-05-13 03:06:15.331506] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:25:24.575 [2024-05-13 03:06:15.331519] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:25:24.575 [2024-05-13 03:06:15.331532] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.575 [2024-05-13 03:06:15.331554] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.575 [2024-05-13 03:06:15.331561] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5c0450) 00:25:24.575 [2024-05-13 03:06:15.331571] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.575 [2024-05-13 03:06:15.331592] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x627800, cid 0, qid 0 00:25:24.575 [2024-05-13 03:06:15.331802] 
nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.575 [2024-05-13 03:06:15.331818] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.575 [2024-05-13 03:06:15.331825] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.575 [2024-05-13 03:06:15.331832] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x627800) on tqpair=0x5c0450 00:25:24.575 [2024-05-13 03:06:15.331841] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:24.575 [2024-05-13 03:06:15.331858] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.575 [2024-05-13 03:06:15.331867] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.575 [2024-05-13 03:06:15.331874] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5c0450) 00:25:24.575 [2024-05-13 03:06:15.331885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.575 [2024-05-13 03:06:15.331907] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x627800, cid 0, qid 0 00:25:24.575 [2024-05-13 03:06:15.332137] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.575 [2024-05-13 03:06:15.332152] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.575 [2024-05-13 03:06:15.332158] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.575 [2024-05-13 03:06:15.332169] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x627800) on tqpair=0x5c0450 00:25:24.575 [2024-05-13 03:06:15.332178] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:25:24.575 [2024-05-13 03:06:15.332186] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:25:24.575 [2024-05-13 03:06:15.332200] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:24.575 [2024-05-13 03:06:15.332325] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:25:24.575 [2024-05-13 03:06:15.332332] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:24.575 [2024-05-13 03:06:15.332344] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.575 [2024-05-13 03:06:15.332351] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.575 [2024-05-13 03:06:15.332358] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5c0450) 00:25:24.575 [2024-05-13 03:06:15.332368] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.575 [2024-05-13 03:06:15.332389] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x627800, cid 0, qid 0 00:25:24.575 [2024-05-13 03:06:15.332635] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.575 [2024-05-13 03:06:15.332650] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.575 [2024-05-13 03:06:15.332657] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.575 [2024-05-13 03:06:15.332664] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x627800) on tqpair=0x5c0450 00:25:24.575 [2024-05-13 03:06:15.332672] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:24.575 [2024-05-13 03:06:15.332689] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.575 [2024-05-13 03:06:15.332708] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.575 [2024-05-13 03:06:15.332715] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5c0450) 00:25:24.575 [2024-05-13 03:06:15.332726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.575 [2024-05-13 03:06:15.332747] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x627800, cid 0, qid 0 00:25:24.575 [2024-05-13 03:06:15.332977] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.575 [2024-05-13 03:06:15.332989] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.575 [2024-05-13 03:06:15.332996] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.575 [2024-05-13 03:06:15.333003] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x627800) on tqpair=0x5c0450 00:25:24.575 [2024-05-13 03:06:15.333010] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:24.575 [2024-05-13 03:06:15.333019] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:25:24.575 [2024-05-13 03:06:15.333032] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:25:24.575 [2024-05-13 03:06:15.333065] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:25:24.575 [2024-05-13 03:06:15.333079] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.575 [2024-05-13 03:06:15.333086] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5c0450) 00:25:24.575 [2024-05-13 03:06:15.333097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.575 [2024-05-13 03:06:15.333121] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x627800, cid 0, qid 0 00:25:24.575 [2024-05-13 03:06:15.333352] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:24.575 [2024-05-13 03:06:15.333368] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:24.575 [2024-05-13 03:06:15.333375] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:24.575 [2024-05-13 03:06:15.333381] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5c0450): datao=0, datal=4096, cccid=0 00:25:24.575 [2024-05-13 03:06:15.333389] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x627800) on tqpair(0x5c0450): expected_datao=0, payload_size=4096 00:25:24.575 [2024-05-13 03:06:15.333397] nvme_tcp.c: 766:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:25:24.575 [2024-05-13 03:06:15.333479] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:24.575 [2024-05-13 03:06:15.333504] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:24.575 [2024-05-13 03:06:15.333680] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.575 [2024-05-13 03:06:15.333702] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.575 [2024-05-13 03:06:15.333710] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.575 [2024-05-13 03:06:15.333717] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x627800) on tqpair=0x5c0450 00:25:24.575 [2024-05-13 03:06:15.333728] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:25:24.575 [2024-05-13 03:06:15.333737] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:25:24.575 [2024-05-13 03:06:15.333744] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:25:24.575 [2024-05-13 03:06:15.333751] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:25:24.575 [2024-05-13 03:06:15.333759] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:25:24.575 [2024-05-13 03:06:15.333767] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:25:24.575 [2024-05-13 03:06:15.333781] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:25:24.575 [2024-05-13 03:06:15.333798] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.575 [2024-05-13 03:06:15.333806] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.575 [2024-05-13 03:06:15.333813] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5c0450) 00:25:24.575 [2024-05-13 03:06:15.333824] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:24.575 [2024-05-13 03:06:15.333846] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x627800, cid 0, qid 0 00:25:24.575 [2024-05-13 03:06:15.334035] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.575 [2024-05-13 03:06:15.334050] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.575 [2024-05-13 03:06:15.334057] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.575 [2024-05-13 03:06:15.334063] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x627800) on tqpair=0x5c0450 00:25:24.575 [2024-05-13 03:06:15.334074] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.575 [2024-05-13 03:06:15.334081] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.575 [2024-05-13 03:06:15.334088] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5c0450) 00:25:24.575 [2024-05-13 03:06:15.334098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.575 [2024-05-13 03:06:15.334108] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:25:24.575 [2024-05-13 03:06:15.334115] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.575 [2024-05-13 03:06:15.334125] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x5c0450) 00:25:24.575 [2024-05-13 03:06:15.334134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.575 [2024-05-13 03:06:15.334159] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.575 [2024-05-13 03:06:15.334166] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.575 [2024-05-13 03:06:15.334172] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x5c0450) 00:25:24.575 [2024-05-13 03:06:15.334181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.575 [2024-05-13 03:06:15.334190] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.575 [2024-05-13 03:06:15.334196] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.575 [2024-05-13 03:06:15.334202] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5c0450) 00:25:24.575 [2024-05-13 03:06:15.334211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.575 [2024-05-13 03:06:15.334219] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:24.575 [2024-05-13 03:06:15.334238] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:24.575 [2024-05-13 03:06:15.334250] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.575 [2024-05-13 03:06:15.334257] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5c0450) 00:25:24.576 [2024-05-13 03:06:15.334267] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.576 [2024-05-13 03:06:15.334288] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x627800, cid 0, qid 0 00:25:24.576 [2024-05-13 03:06:15.334315] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x627960, cid 1, qid 0 00:25:24.576 [2024-05-13 03:06:15.334323] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x627ac0, cid 2, qid 0 00:25:24.576 [2024-05-13 03:06:15.334331] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x627c20, cid 3, qid 0 00:25:24.576 [2024-05-13 03:06:15.334338] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x627d80, cid 4, qid 0 00:25:24.576 [2024-05-13 03:06:15.334567] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.576 [2024-05-13 03:06:15.334579] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.576 [2024-05-13 03:06:15.334586] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.576 [2024-05-13 03:06:15.334593] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x627d80) on tqpair=0x5c0450 00:25:24.576 [2024-05-13 03:06:15.334616] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive 
every 5000000 us 00:25:24.576 [2024-05-13 03:06:15.334625] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:25:24.576 [2024-05-13 03:06:15.334643] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:25:24.576 [2024-05-13 03:06:15.334655] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:25:24.576 [2024-05-13 03:06:15.334666] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.576 [2024-05-13 03:06:15.334673] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.576 [2024-05-13 03:06:15.338703] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5c0450) 00:25:24.576 [2024-05-13 03:06:15.338720] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:24.576 [2024-05-13 03:06:15.338747] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x627d80, cid 4, qid 0 00:25:24.576 [2024-05-13 03:06:15.338981] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.576 [2024-05-13 03:06:15.338993] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.576 [2024-05-13 03:06:15.339000] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.576 [2024-05-13 03:06:15.339007] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x627d80) on tqpair=0x5c0450 00:25:24.576 [2024-05-13 03:06:15.339079] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:25:24.576 [2024-05-13 03:06:15.339099] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:25:24.576 [2024-05-13 03:06:15.339114] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.576 [2024-05-13 03:06:15.339122] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5c0450) 00:25:24.576 [2024-05-13 03:06:15.339132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.576 [2024-05-13 03:06:15.339153] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x627d80, cid 4, qid 0 00:25:24.576 [2024-05-13 03:06:15.339372] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:24.576 [2024-05-13 03:06:15.339384] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:24.576 [2024-05-13 03:06:15.339391] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:24.576 [2024-05-13 03:06:15.339397] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5c0450): datao=0, datal=4096, cccid=4 00:25:24.576 [2024-05-13 03:06:15.339405] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x627d80) on tqpair(0x5c0450): expected_datao=0, payload_size=4096 00:25:24.576 [2024-05-13 03:06:15.339413] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.576 [2024-05-13 03:06:15.339423] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:24.576 [2024-05-13 03:06:15.339431] 
nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:24.576 [2024-05-13 03:06:15.339516] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.576 [2024-05-13 03:06:15.339527] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.576 [2024-05-13 03:06:15.339534] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.576 [2024-05-13 03:06:15.339540] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x627d80) on tqpair=0x5c0450 00:25:24.576 [2024-05-13 03:06:15.339562] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:25:24.576 [2024-05-13 03:06:15.339582] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:25:24.576 [2024-05-13 03:06:15.339600] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:25:24.576 [2024-05-13 03:06:15.339613] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.576 [2024-05-13 03:06:15.339621] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5c0450) 00:25:24.576 [2024-05-13 03:06:15.339632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.576 [2024-05-13 03:06:15.339653] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x627d80, cid 4, qid 0 00:25:24.576 [2024-05-13 03:06:15.339867] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:24.576 [2024-05-13 03:06:15.339880] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:24.576 [2024-05-13 03:06:15.339887] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:24.576 [2024-05-13 03:06:15.339893] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5c0450): datao=0, datal=4096, cccid=4 00:25:24.576 [2024-05-13 03:06:15.339905] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x627d80) on tqpair(0x5c0450): expected_datao=0, payload_size=4096 00:25:24.576 [2024-05-13 03:06:15.339913] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.576 [2024-05-13 03:06:15.339980] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:24.576 [2024-05-13 03:06:15.339989] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:24.837 [2024-05-13 03:06:15.380869] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.837 [2024-05-13 03:06:15.380887] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.837 [2024-05-13 03:06:15.380895] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.837 [2024-05-13 03:06:15.380902] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x627d80) on tqpair=0x5c0450 00:25:24.837 [2024-05-13 03:06:15.380926] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:25:24.837 [2024-05-13 03:06:15.380946] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:25:24.837 [2024-05-13 03:06:15.380960] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.837 [2024-05-13 
03:06:15.380968] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5c0450) 00:25:24.837 [2024-05-13 03:06:15.380979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.837 [2024-05-13 03:06:15.381002] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x627d80, cid 4, qid 0 00:25:24.837 [2024-05-13 03:06:15.381205] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:24.837 [2024-05-13 03:06:15.381220] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:24.837 [2024-05-13 03:06:15.381227] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:24.837 [2024-05-13 03:06:15.381234] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5c0450): datao=0, datal=4096, cccid=4 00:25:24.837 [2024-05-13 03:06:15.381241] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x627d80) on tqpair(0x5c0450): expected_datao=0, payload_size=4096 00:25:24.837 [2024-05-13 03:06:15.381249] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.837 [2024-05-13 03:06:15.381259] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:24.837 [2024-05-13 03:06:15.381267] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:24.837 [2024-05-13 03:06:15.381356] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.837 [2024-05-13 03:06:15.381367] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.837 [2024-05-13 03:06:15.381374] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.837 [2024-05-13 03:06:15.381380] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x627d80) on tqpair=0x5c0450 00:25:24.837 [2024-05-13 03:06:15.381395] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:25:24.837 [2024-05-13 03:06:15.381410] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:25:24.837 [2024-05-13 03:06:15.381434] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:25:24.837 [2024-05-13 03:06:15.381445] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:25:24.837 [2024-05-13 03:06:15.381454] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:25:24.837 [2024-05-13 03:06:15.381463] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:25:24.837 [2024-05-13 03:06:15.381474] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:25:24.837 [2024-05-13 03:06:15.381484] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:25:24.837 [2024-05-13 03:06:15.381507] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.837 [2024-05-13 03:06:15.381516] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5c0450) 00:25:24.837 [2024-05-13 
03:06:15.381527] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.837 [2024-05-13 03:06:15.381538] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.837 [2024-05-13 03:06:15.381545] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.837 [2024-05-13 03:06:15.381566] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5c0450) 00:25:24.837 [2024-05-13 03:06:15.381576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.837 [2024-05-13 03:06:15.381600] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x627d80, cid 4, qid 0 00:25:24.837 [2024-05-13 03:06:15.381612] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x627ee0, cid 5, qid 0 00:25:24.837 [2024-05-13 03:06:15.381834] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.837 [2024-05-13 03:06:15.381848] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.837 [2024-05-13 03:06:15.381855] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.837 [2024-05-13 03:06:15.381862] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x627d80) on tqpair=0x5c0450 00:25:24.837 [2024-05-13 03:06:15.381873] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.837 [2024-05-13 03:06:15.381882] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.837 [2024-05-13 03:06:15.381888] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.837 [2024-05-13 03:06:15.381895] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x627ee0) on tqpair=0x5c0450 00:25:24.837 [2024-05-13 03:06:15.381911] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.837 [2024-05-13 03:06:15.381920] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5c0450) 00:25:24.837 [2024-05-13 03:06:15.381930] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.837 [2024-05-13 03:06:15.381952] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x627ee0, cid 5, qid 0 00:25:24.837 [2024-05-13 03:06:15.382138] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.837 [2024-05-13 03:06:15.382153] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.838 [2024-05-13 03:06:15.382160] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.838 [2024-05-13 03:06:15.382167] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x627ee0) on tqpair=0x5c0450 00:25:24.838 [2024-05-13 03:06:15.382183] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.838 [2024-05-13 03:06:15.382192] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5c0450) 00:25:24.838 [2024-05-13 03:06:15.382202] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.838 [2024-05-13 03:06:15.382223] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x627ee0, cid 5, qid 0 00:25:24.838 [2024-05-13 03:06:15.382417] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:25:24.838 [2024-05-13 03:06:15.382429] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.838 [2024-05-13 03:06:15.382436] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.838 [2024-05-13 03:06:15.382443] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x627ee0) on tqpair=0x5c0450 00:25:24.838 [2024-05-13 03:06:15.382458] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.838 [2024-05-13 03:06:15.382471] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5c0450) 00:25:24.838 [2024-05-13 03:06:15.382482] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.838 [2024-05-13 03:06:15.382518] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x627ee0, cid 5, qid 0 00:25:24.838 [2024-05-13 03:06:15.386711] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.838 [2024-05-13 03:06:15.386727] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.838 [2024-05-13 03:06:15.386734] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.838 [2024-05-13 03:06:15.386740] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x627ee0) on tqpair=0x5c0450 00:25:24.838 [2024-05-13 03:06:15.386775] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.838 [2024-05-13 03:06:15.386785] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5c0450) 00:25:24.838 [2024-05-13 03:06:15.386796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.838 [2024-05-13 03:06:15.386808] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.838 [2024-05-13 03:06:15.386815] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5c0450) 00:25:24.838 [2024-05-13 03:06:15.386824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.838 [2024-05-13 03:06:15.386835] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.838 [2024-05-13 03:06:15.386842] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x5c0450) 00:25:24.838 [2024-05-13 03:06:15.386852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.838 [2024-05-13 03:06:15.386863] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.838 [2024-05-13 03:06:15.386871] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x5c0450) 00:25:24.838 [2024-05-13 03:06:15.386880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.838 [2024-05-13 03:06:15.386903] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x627ee0, cid 5, qid 0 00:25:24.838 [2024-05-13 03:06:15.386913] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x627d80, cid 4, qid 0 00:25:24.838 [2024-05-13 03:06:15.386921] 
nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x628040, cid 6, qid 0 00:25:24.838 [2024-05-13 03:06:15.386929] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6281a0, cid 7, qid 0 00:25:24.838 [2024-05-13 03:06:15.387194] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:24.838 [2024-05-13 03:06:15.387210] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:24.838 [2024-05-13 03:06:15.387217] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:24.838 [2024-05-13 03:06:15.387223] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5c0450): datao=0, datal=8192, cccid=5 00:25:24.838 [2024-05-13 03:06:15.387231] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x627ee0) on tqpair(0x5c0450): expected_datao=0, payload_size=8192 00:25:24.838 [2024-05-13 03:06:15.387239] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.838 [2024-05-13 03:06:15.387429] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:24.838 [2024-05-13 03:06:15.387439] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:24.838 [2024-05-13 03:06:15.387448] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:24.838 [2024-05-13 03:06:15.387457] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:24.838 [2024-05-13 03:06:15.387468] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:24.838 [2024-05-13 03:06:15.387475] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5c0450): datao=0, datal=512, cccid=4 00:25:24.838 [2024-05-13 03:06:15.387483] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x627d80) on tqpair(0x5c0450): expected_datao=0, payload_size=512 00:25:24.838 [2024-05-13 03:06:15.387490] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.838 [2024-05-13 03:06:15.387500] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:24.838 [2024-05-13 03:06:15.387507] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:24.838 [2024-05-13 03:06:15.387516] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:24.838 [2024-05-13 03:06:15.387524] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:24.838 [2024-05-13 03:06:15.387531] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:24.838 [2024-05-13 03:06:15.387537] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5c0450): datao=0, datal=512, cccid=6 00:25:24.838 [2024-05-13 03:06:15.387545] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x628040) on tqpair(0x5c0450): expected_datao=0, payload_size=512 00:25:24.838 [2024-05-13 03:06:15.387552] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.838 [2024-05-13 03:06:15.387561] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:24.838 [2024-05-13 03:06:15.387569] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:24.838 [2024-05-13 03:06:15.387577] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:24.838 [2024-05-13 03:06:15.387586] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:24.838 [2024-05-13 03:06:15.387592] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:24.838 [2024-05-13 03:06:15.387599] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x5c0450): datao=0, datal=4096, cccid=7 00:25:24.838 [2024-05-13 03:06:15.387606] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6281a0) on tqpair(0x5c0450): expected_datao=0, payload_size=4096 00:25:24.838 [2024-05-13 03:06:15.387613] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.838 [2024-05-13 03:06:15.387623] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:24.838 [2024-05-13 03:06:15.387630] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:24.838 [2024-05-13 03:06:15.387642] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.838 [2024-05-13 03:06:15.387651] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.838 [2024-05-13 03:06:15.387658] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.838 [2024-05-13 03:06:15.387665] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x627ee0) on tqpair=0x5c0450 00:25:24.838 [2024-05-13 03:06:15.387684] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.838 [2024-05-13 03:06:15.387710] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.838 [2024-05-13 03:06:15.387720] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.838 [2024-05-13 03:06:15.387727] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x627d80) on tqpair=0x5c0450 00:25:24.838 [2024-05-13 03:06:15.387741] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.838 [2024-05-13 03:06:15.387752] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.838 [2024-05-13 03:06:15.387759] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.838 [2024-05-13 03:06:15.387765] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x628040) on tqpair=0x5c0450 00:25:24.838 [2024-05-13 03:06:15.387779] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.838 [2024-05-13 03:06:15.387789] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.838 [2024-05-13 03:06:15.387796] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.838 [2024-05-13 03:06:15.387802] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6281a0) on tqpair=0x5c0450 00:25:24.838 ===================================================== 00:25:24.838 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:24.838 ===================================================== 00:25:24.838 Controller Capabilities/Features 00:25:24.838 ================================ 00:25:24.838 Vendor ID: 8086 00:25:24.838 Subsystem Vendor ID: 8086 00:25:24.838 Serial Number: SPDK00000000000001 00:25:24.838 Model Number: SPDK bdev Controller 00:25:24.838 Firmware Version: 24.05 00:25:24.838 Recommended Arb Burst: 6 00:25:24.838 IEEE OUI Identifier: e4 d2 5c 00:25:24.838 Multi-path I/O 00:25:24.838 May have multiple subsystem ports: Yes 00:25:24.838 May have multiple controllers: Yes 00:25:24.838 Associated with SR-IOV VF: No 00:25:24.838 Max Data Transfer Size: 131072 00:25:24.838 Max Number of Namespaces: 32 00:25:24.838 Max Number of I/O Queues: 127 00:25:24.838 NVMe Specification Version (VS): 1.3 00:25:24.838 NVMe Specification Version (Identify): 1.3 00:25:24.838 Maximum Queue Entries: 128 00:25:24.838 Contiguous Queues Required: Yes 00:25:24.838 Arbitration Mechanisms Supported 00:25:24.838 Weighted Round Robin: Not 
Supported 00:25:24.838 Vendor Specific: Not Supported 00:25:24.838 Reset Timeout: 15000 ms 00:25:24.838 Doorbell Stride: 4 bytes 00:25:24.838 NVM Subsystem Reset: Not Supported 00:25:24.838 Command Sets Supported 00:25:24.838 NVM Command Set: Supported 00:25:24.838 Boot Partition: Not Supported 00:25:24.838 Memory Page Size Minimum: 4096 bytes 00:25:24.838 Memory Page Size Maximum: 4096 bytes 00:25:24.838 Persistent Memory Region: Not Supported 00:25:24.838 Optional Asynchronous Events Supported 00:25:24.838 Namespace Attribute Notices: Supported 00:25:24.838 Firmware Activation Notices: Not Supported 00:25:24.838 ANA Change Notices: Not Supported 00:25:24.838 PLE Aggregate Log Change Notices: Not Supported 00:25:24.838 LBA Status Info Alert Notices: Not Supported 00:25:24.838 EGE Aggregate Log Change Notices: Not Supported 00:25:24.838 Normal NVM Subsystem Shutdown event: Not Supported 00:25:24.838 Zone Descriptor Change Notices: Not Supported 00:25:24.838 Discovery Log Change Notices: Not Supported 00:25:24.838 Controller Attributes 00:25:24.838 128-bit Host Identifier: Supported 00:25:24.839 Non-Operational Permissive Mode: Not Supported 00:25:24.839 NVM Sets: Not Supported 00:25:24.839 Read Recovery Levels: Not Supported 00:25:24.839 Endurance Groups: Not Supported 00:25:24.839 Predictable Latency Mode: Not Supported 00:25:24.839 Traffic Based Keep ALive: Not Supported 00:25:24.839 Namespace Granularity: Not Supported 00:25:24.839 SQ Associations: Not Supported 00:25:24.839 UUID List: Not Supported 00:25:24.839 Multi-Domain Subsystem: Not Supported 00:25:24.839 Fixed Capacity Management: Not Supported 00:25:24.839 Variable Capacity Management: Not Supported 00:25:24.839 Delete Endurance Group: Not Supported 00:25:24.839 Delete NVM Set: Not Supported 00:25:24.839 Extended LBA Formats Supported: Not Supported 00:25:24.839 Flexible Data Placement Supported: Not Supported 00:25:24.839 00:25:24.839 Controller Memory Buffer Support 00:25:24.839 ================================ 00:25:24.839 Supported: No 00:25:24.839 00:25:24.839 Persistent Memory Region Support 00:25:24.839 ================================ 00:25:24.839 Supported: No 00:25:24.839 00:25:24.839 Admin Command Set Attributes 00:25:24.839 ============================ 00:25:24.839 Security Send/Receive: Not Supported 00:25:24.839 Format NVM: Not Supported 00:25:24.839 Firmware Activate/Download: Not Supported 00:25:24.839 Namespace Management: Not Supported 00:25:24.839 Device Self-Test: Not Supported 00:25:24.839 Directives: Not Supported 00:25:24.839 NVMe-MI: Not Supported 00:25:24.839 Virtualization Management: Not Supported 00:25:24.839 Doorbell Buffer Config: Not Supported 00:25:24.839 Get LBA Status Capability: Not Supported 00:25:24.839 Command & Feature Lockdown Capability: Not Supported 00:25:24.839 Abort Command Limit: 4 00:25:24.839 Async Event Request Limit: 4 00:25:24.839 Number of Firmware Slots: N/A 00:25:24.839 Firmware Slot 1 Read-Only: N/A 00:25:24.839 Firmware Activation Without Reset: N/A 00:25:24.839 Multiple Update Detection Support: N/A 00:25:24.839 Firmware Update Granularity: No Information Provided 00:25:24.839 Per-Namespace SMART Log: No 00:25:24.839 Asymmetric Namespace Access Log Page: Not Supported 00:25:24.839 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:25:24.839 Command Effects Log Page: Supported 00:25:24.839 Get Log Page Extended Data: Supported 00:25:24.839 Telemetry Log Pages: Not Supported 00:25:24.839 Persistent Event Log Pages: Not Supported 00:25:24.839 Supported Log Pages Log Page: May 
Support 00:25:24.839 Commands Supported & Effects Log Page: Not Supported 00:25:24.839 Feature Identifiers & Effects Log Page:May Support 00:25:24.839 NVMe-MI Commands & Effects Log Page: May Support 00:25:24.839 Data Area 4 for Telemetry Log: Not Supported 00:25:24.839 Error Log Page Entries Supported: 128 00:25:24.839 Keep Alive: Supported 00:25:24.839 Keep Alive Granularity: 10000 ms 00:25:24.839 00:25:24.839 NVM Command Set Attributes 00:25:24.839 ========================== 00:25:24.839 Submission Queue Entry Size 00:25:24.839 Max: 64 00:25:24.839 Min: 64 00:25:24.839 Completion Queue Entry Size 00:25:24.839 Max: 16 00:25:24.839 Min: 16 00:25:24.839 Number of Namespaces: 32 00:25:24.839 Compare Command: Supported 00:25:24.839 Write Uncorrectable Command: Not Supported 00:25:24.839 Dataset Management Command: Supported 00:25:24.839 Write Zeroes Command: Supported 00:25:24.839 Set Features Save Field: Not Supported 00:25:24.839 Reservations: Supported 00:25:24.839 Timestamp: Not Supported 00:25:24.839 Copy: Supported 00:25:24.839 Volatile Write Cache: Present 00:25:24.839 Atomic Write Unit (Normal): 1 00:25:24.839 Atomic Write Unit (PFail): 1 00:25:24.839 Atomic Compare & Write Unit: 1 00:25:24.839 Fused Compare & Write: Supported 00:25:24.839 Scatter-Gather List 00:25:24.839 SGL Command Set: Supported 00:25:24.839 SGL Keyed: Supported 00:25:24.839 SGL Bit Bucket Descriptor: Not Supported 00:25:24.839 SGL Metadata Pointer: Not Supported 00:25:24.839 Oversized SGL: Not Supported 00:25:24.839 SGL Metadata Address: Not Supported 00:25:24.839 SGL Offset: Supported 00:25:24.839 Transport SGL Data Block: Not Supported 00:25:24.839 Replay Protected Memory Block: Not Supported 00:25:24.839 00:25:24.839 Firmware Slot Information 00:25:24.839 ========================= 00:25:24.839 Active slot: 1 00:25:24.839 Slot 1 Firmware Revision: 24.05 00:25:24.839 00:25:24.839 00:25:24.839 Commands Supported and Effects 00:25:24.839 ============================== 00:25:24.839 Admin Commands 00:25:24.839 -------------- 00:25:24.839 Get Log Page (02h): Supported 00:25:24.839 Identify (06h): Supported 00:25:24.839 Abort (08h): Supported 00:25:24.839 Set Features (09h): Supported 00:25:24.839 Get Features (0Ah): Supported 00:25:24.839 Asynchronous Event Request (0Ch): Supported 00:25:24.839 Keep Alive (18h): Supported 00:25:24.839 I/O Commands 00:25:24.839 ------------ 00:25:24.839 Flush (00h): Supported LBA-Change 00:25:24.839 Write (01h): Supported LBA-Change 00:25:24.839 Read (02h): Supported 00:25:24.839 Compare (05h): Supported 00:25:24.839 Write Zeroes (08h): Supported LBA-Change 00:25:24.839 Dataset Management (09h): Supported LBA-Change 00:25:24.839 Copy (19h): Supported LBA-Change 00:25:24.839 Unknown (79h): Supported LBA-Change 00:25:24.839 Unknown (7Ah): Supported 00:25:24.839 00:25:24.839 Error Log 00:25:24.839 ========= 00:25:24.839 00:25:24.839 Arbitration 00:25:24.839 =========== 00:25:24.839 Arbitration Burst: 1 00:25:24.839 00:25:24.839 Power Management 00:25:24.839 ================ 00:25:24.839 Number of Power States: 1 00:25:24.839 Current Power State: Power State #0 00:25:24.839 Power State #0: 00:25:24.839 Max Power: 0.00 W 00:25:24.839 Non-Operational State: Operational 00:25:24.839 Entry Latency: Not Reported 00:25:24.839 Exit Latency: Not Reported 00:25:24.839 Relative Read Throughput: 0 00:25:24.839 Relative Read Latency: 0 00:25:24.839 Relative Write Throughput: 0 00:25:24.839 Relative Write Latency: 0 00:25:24.839 Idle Power: Not Reported 00:25:24.839 Active Power: Not Reported 
00:25:24.839 Non-Operational Permissive Mode: Not Supported 00:25:24.839 00:25:24.839 Health Information 00:25:24.839 ================== 00:25:24.839 Critical Warnings: 00:25:24.839 Available Spare Space: OK 00:25:24.839 Temperature: OK 00:25:24.839 Device Reliability: OK 00:25:24.839 Read Only: No 00:25:24.839 Volatile Memory Backup: OK 00:25:24.839 Current Temperature: 0 Kelvin (-273 Celsius) 00:25:24.839 Temperature Threshold: [2024-05-13 03:06:15.387920] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.839 [2024-05-13 03:06:15.387934] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x5c0450) 00:25:24.839 [2024-05-13 03:06:15.387946] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.839 [2024-05-13 03:06:15.387969] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6281a0, cid 7, qid 0 00:25:24.839 [2024-05-13 03:06:15.388166] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.839 [2024-05-13 03:06:15.388178] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.839 [2024-05-13 03:06:15.388185] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.839 [2024-05-13 03:06:15.388192] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6281a0) on tqpair=0x5c0450 00:25:24.839 [2024-05-13 03:06:15.388236] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:25:24.839 [2024-05-13 03:06:15.388258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.839 [2024-05-13 03:06:15.388285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.839 [2024-05-13 03:06:15.388295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.839 [2024-05-13 03:06:15.388304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.839 [2024-05-13 03:06:15.388316] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.839 [2024-05-13 03:06:15.388323] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.839 [2024-05-13 03:06:15.388330] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5c0450) 00:25:24.839 [2024-05-13 03:06:15.388340] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.839 [2024-05-13 03:06:15.388362] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x627c20, cid 3, qid 0 00:25:24.839 [2024-05-13 03:06:15.388559] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.839 [2024-05-13 03:06:15.388574] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.839 [2024-05-13 03:06:15.388581] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.839 [2024-05-13 03:06:15.388588] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x627c20) on tqpair=0x5c0450 00:25:24.839 [2024-05-13 03:06:15.388600] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.839 [2024-05-13 03:06:15.388608] 
nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.839 [2024-05-13 03:06:15.388614] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5c0450) 00:25:24.839 [2024-05-13 03:06:15.388625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.839 [2024-05-13 03:06:15.388651] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x627c20, cid 3, qid 0 00:25:24.839 [2024-05-13 03:06:15.388856] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.840 [2024-05-13 03:06:15.388871] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.840 [2024-05-13 03:06:15.388878] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.840 [2024-05-13 03:06:15.388885] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x627c20) on tqpair=0x5c0450 00:25:24.840 [2024-05-13 03:06:15.388893] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:25:24.840 [2024-05-13 03:06:15.388901] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:25:24.840 [2024-05-13 03:06:15.388917] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.840 [2024-05-13 03:06:15.388926] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.840 [2024-05-13 03:06:15.388937] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5c0450) 00:25:24.840 [2024-05-13 03:06:15.388948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.840 [2024-05-13 03:06:15.388969] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x627c20, cid 3, qid 0 00:25:24.840 [2024-05-13 03:06:15.389165] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.840 [2024-05-13 03:06:15.389177] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.840 [2024-05-13 03:06:15.389184] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.840 [2024-05-13 03:06:15.389190] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x627c20) on tqpair=0x5c0450 00:25:24.840 [2024-05-13 03:06:15.389206] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.840 [2024-05-13 03:06:15.389216] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.840 [2024-05-13 03:06:15.389222] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5c0450) 00:25:24.840 [2024-05-13 03:06:15.389232] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.840 [2024-05-13 03:06:15.389252] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x627c20, cid 3, qid 0 00:25:24.840 [2024-05-13 03:06:15.389431] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.840 [2024-05-13 03:06:15.389446] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.840 [2024-05-13 03:06:15.389453] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.840 [2024-05-13 03:06:15.389460] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x627c20) on tqpair=0x5c0450 00:25:24.840 [2024-05-13 03:06:15.389477] 
nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.840 [2024-05-13 03:06:15.389486] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.840 [2024-05-13 03:06:15.389493] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5c0450) 00:25:24.840 [2024-05-13 03:06:15.389503] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.840 [2024-05-13 03:06:15.389523] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x627c20, cid 3, qid 0 00:25:24.840 [2024-05-13 03:06:15.389722] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.840 [2024-05-13 03:06:15.389737] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.840 [2024-05-13 03:06:15.389744] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.840 [2024-05-13 03:06:15.389751] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x627c20) on tqpair=0x5c0450 00:25:24.840 [2024-05-13 03:06:15.389768] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.840 [2024-05-13 03:06:15.389777] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.840 [2024-05-13 03:06:15.389783] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5c0450) 00:25:24.840 [2024-05-13 03:06:15.389794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.840 [2024-05-13 03:06:15.389814] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x627c20, cid 3, qid 0 00:25:24.840 [2024-05-13 03:06:15.389991] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.840 [2024-05-13 03:06:15.390002] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.840 [2024-05-13 03:06:15.390009] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.840 [2024-05-13 03:06:15.390016] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x627c20) on tqpair=0x5c0450 00:25:24.840 [2024-05-13 03:06:15.390032] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.840 [2024-05-13 03:06:15.390041] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.840 [2024-05-13 03:06:15.390047] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5c0450) 00:25:24.840 [2024-05-13 03:06:15.390061] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.840 [2024-05-13 03:06:15.390083] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x627c20, cid 3, qid 0 00:25:24.840 [2024-05-13 03:06:15.390311] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.840 [2024-05-13 03:06:15.390323] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.840 [2024-05-13 03:06:15.390330] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.840 [2024-05-13 03:06:15.390337] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x627c20) on tqpair=0x5c0450 00:25:24.840 [2024-05-13 03:06:15.390353] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.840 [2024-05-13 03:06:15.390362] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.840 [2024-05-13 
03:06:15.390369] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5c0450) 00:25:24.840 [2024-05-13 03:06:15.390379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.840 [2024-05-13 03:06:15.390400] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x627c20, cid 3, qid 0 00:25:24.840 [2024-05-13 03:06:15.390633] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.840 [2024-05-13 03:06:15.390648] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.840 [2024-05-13 03:06:15.390655] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.840 [2024-05-13 03:06:15.390662] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x627c20) on tqpair=0x5c0450 00:25:24.840 [2024-05-13 03:06:15.390679] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.840 [2024-05-13 03:06:15.390688] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.840 [2024-05-13 03:06:15.394701] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5c0450) 00:25:24.840 [2024-05-13 03:06:15.394720] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.840 [2024-05-13 03:06:15.394742] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x627c20, cid 3, qid 0 00:25:24.840 [2024-05-13 03:06:15.394991] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.840 [2024-05-13 03:06:15.395003] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.840 [2024-05-13 03:06:15.395010] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.840 [2024-05-13 03:06:15.395017] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x627c20) on tqpair=0x5c0450 00:25:24.840 [2024-05-13 03:06:15.395030] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:25:24.840 0 Kelvin (-273 Celsius) 00:25:24.840 Available Spare: 0% 00:25:24.840 Available Spare Threshold: 0% 00:25:24.840 Life Percentage Used: 0% 00:25:24.840 Data Units Read: 0 00:25:24.840 Data Units Written: 0 00:25:24.840 Host Read Commands: 0 00:25:24.840 Host Write Commands: 0 00:25:24.840 Controller Busy Time: 0 minutes 00:25:24.840 Power Cycles: 0 00:25:24.840 Power On Hours: 0 hours 00:25:24.840 Unsafe Shutdowns: 0 00:25:24.840 Unrecoverable Media Errors: 0 00:25:24.840 Lifetime Error Log Entries: 0 00:25:24.840 Warning Temperature Time: 0 minutes 00:25:24.840 Critical Temperature Time: 0 minutes 00:25:24.840 00:25:24.840 Number of Queues 00:25:24.840 ================ 00:25:24.840 Number of I/O Submission Queues: 127 00:25:24.840 Number of I/O Completion Queues: 127 00:25:24.840 00:25:24.840 Active Namespaces 00:25:24.840 ================= 00:25:24.840 Namespace ID:1 00:25:24.840 Error Recovery Timeout: Unlimited 00:25:24.840 Command Set Identifier: NVM (00h) 00:25:24.840 Deallocate: Supported 00:25:24.840 Deallocated/Unwritten Error: Not Supported 00:25:24.840 Deallocated Read Value: Unknown 00:25:24.840 Deallocate in Write Zeroes: Not Supported 00:25:24.840 Deallocated Guard Field: 0xFFFF 00:25:24.840 Flush: Supported 00:25:24.840 Reservation: Supported 00:25:24.840 Namespace Sharing Capabilities: Multiple Controllers 00:25:24.840 Size (in LBAs): 131072 
(0GiB) 00:25:24.840 Capacity (in LBAs): 131072 (0GiB) 00:25:24.840 Utilization (in LBAs): 131072 (0GiB) 00:25:24.840 NGUID: ABCDEF0123456789ABCDEF0123456789 00:25:24.840 EUI64: ABCDEF0123456789 00:25:24.840 UUID: fd8a4fe4-725d-40e2-985e-600291093bc4 00:25:24.840 Thin Provisioning: Not Supported 00:25:24.840 Per-NS Atomic Units: Yes 00:25:24.840 Atomic Boundary Size (Normal): 0 00:25:24.840 Atomic Boundary Size (PFail): 0 00:25:24.840 Atomic Boundary Offset: 0 00:25:24.840 Maximum Single Source Range Length: 65535 00:25:24.840 Maximum Copy Length: 65535 00:25:24.840 Maximum Source Range Count: 1 00:25:24.840 NGUID/EUI64 Never Reused: No 00:25:24.840 Namespace Write Protected: No 00:25:24.840 Number of LBA Formats: 1 00:25:24.840 Current LBA Format: LBA Format #00 00:25:24.840 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:24.840 00:25:24.840 03:06:15 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:25:24.840 03:06:15 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:24.840 03:06:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.840 03:06:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:24.840 03:06:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.840 03:06:15 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:25:24.840 03:06:15 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:25:24.840 03:06:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:24.840 03:06:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:25:24.840 03:06:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:24.840 03:06:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:25:24.840 03:06:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:24.840 03:06:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:24.840 rmmod nvme_tcp 00:25:24.840 rmmod nvme_fabrics 00:25:24.840 rmmod nvme_keyring 00:25:24.841 03:06:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:24.841 03:06:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:25:24.841 03:06:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:25:24.841 03:06:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 424943 ']' 00:25:24.841 03:06:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 424943 00:25:24.841 03:06:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 424943 ']' 00:25:24.841 03:06:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 424943 00:25:24.841 03:06:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:25:24.841 03:06:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:24.841 03:06:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 424943 00:25:24.841 03:06:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:24.841 03:06:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:24.841 03:06:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 424943' 00:25:24.841 killing process with pid 424943 00:25:24.841 03:06:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # kill 
424943 00:25:24.841 [2024-05-13 03:06:15.506550] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:24.841 03:06:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 424943 00:25:25.101 03:06:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:25.101 03:06:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:25.101 03:06:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:25.101 03:06:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:25.101 03:06:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:25.101 03:06:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:25.101 03:06:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:25.101 03:06:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:27.006 03:06:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:27.265 00:25:27.265 real 0m5.610s 00:25:27.265 user 0m4.637s 00:25:27.265 sys 0m1.904s 00:25:27.265 03:06:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:27.265 03:06:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:27.265 ************************************ 00:25:27.265 END TEST nvmf_identify 00:25:27.265 ************************************ 00:25:27.265 03:06:17 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:27.265 03:06:17 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:27.265 03:06:17 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:27.265 03:06:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:27.265 ************************************ 00:25:27.265 START TEST nvmf_perf 00:25:27.265 ************************************ 00:25:27.265 03:06:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:27.265 * Looking for test storage... 
00:25:27.265 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:27.265 03:06:17 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:27.265 03:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:25:27.265 03:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:27.265 03:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:27.265 03:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:27.265 03:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:27.265 03:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:27.265 03:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:27.265 03:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:27.265 03:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:27.265 03:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:27.266 03:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:27.266 03:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:27.266 03:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:27.266 03:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:27.266 03:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:27.266 03:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:27.266 03:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:27.266 03:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:27.266 03:06:17 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:27.266 03:06:17 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:27.266 03:06:17 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:27.266 03:06:17 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.266 03:06:17 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.266 03:06:17 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.266 03:06:17 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:25:27.266 03:06:17 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.266 03:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:25:27.266 03:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:27.266 03:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:27.266 03:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:27.266 03:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:27.266 03:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:27.266 03:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:27.266 03:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:27.266 03:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:27.266 03:06:17 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:27.266 03:06:17 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:27.266 03:06:17 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:27.266 03:06:17 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:25:27.266 03:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:27.266 03:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:27.266 03:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:27.266 03:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:27.266 03:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:27.266 03:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:27.266 03:06:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:27.266 03:06:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:27.266 03:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:27.266 03:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:27.266 03:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:25:27.266 03:06:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:29.170 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:29.170 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:29.170 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:29.170 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:29.171 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:29.171 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:29.171 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:29.171 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:29.171 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:29.171 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:29.171 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:29.171 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:29.171 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:25:29.171 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:29.171 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:29.171 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:29.171 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:25:29.171 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:29.171 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:29.171 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:29.171 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:29.171 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:29.171 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:29.171 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:29.171 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:29.171 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:29.171 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:29.171 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:29.171 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:29.430 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:29.430 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:29.430 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:29.430 03:06:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:29.430 03:06:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:29.430 03:06:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:29.430 03:06:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:29.430 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:29.430 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:25:29.430 00:25:29.430 --- 10.0.0.2 ping statistics --- 00:25:29.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.430 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:25:29.430 03:06:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:29.430 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:29.430 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:25:29.430 00:25:29.430 --- 10.0.0.1 ping statistics --- 00:25:29.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.430 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:25:29.430 03:06:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:29.430 03:06:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:25:29.430 03:06:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:29.430 03:06:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:29.430 03:06:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:29.430 03:06:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:29.430 03:06:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:29.430 03:06:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:29.430 03:06:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:29.430 03:06:20 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:29.430 03:06:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:29.430 03:06:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:29.430 03:06:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:29.430 03:06:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=427014 00:25:29.430 03:06:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:29.430 03:06:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 427014 00:25:29.430 03:06:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 427014 ']' 00:25:29.430 03:06:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:29.430 03:06:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:29.430 03:06:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:29.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:29.430 03:06:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:29.430 03:06:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:29.430 [2024-05-13 03:06:20.129562] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:25:29.430 [2024-05-13 03:06:20.129650] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:29.430 EAL: No free 2048 kB hugepages reported on node 1 00:25:29.430 [2024-05-13 03:06:20.168084] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:29.430 [2024-05-13 03:06:20.196821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:29.689 [2024-05-13 03:06:20.285686] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
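Note on the nvmftestinit output above: the test isolates the two ports of the NIC pair so that the NVMe/TCP traffic genuinely crosses the link, by moving one port into a private network namespace, numbering both sides, and then starting the target inside that namespace. Condensed from the commands captured in this run (the interface names cvl_0_0/cvl_0_1 and the nvmf_tgt binary are specific to this host, and the long workspace path is abbreviated here), the bring-up is roughly:

  ip netns add cvl_0_0_ns_spdk                                       # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # move one port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address (host side)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                 # host -> namespace sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # namespace -> host sanity check
  ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF       # target runs inside the namespace

Every fabrics run below therefore connects to 10.0.0.2:4420 from the host side of this pair.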
00:25:29.689 [2024-05-13 03:06:20.285766] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:29.689 [2024-05-13 03:06:20.285789] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:29.689 [2024-05-13 03:06:20.285801] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:29.689 [2024-05-13 03:06:20.285811] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:29.689 [2024-05-13 03:06:20.285872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:29.689 [2024-05-13 03:06:20.285933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:29.689 [2024-05-13 03:06:20.285958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:29.689 [2024-05-13 03:06:20.285960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:29.689 03:06:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:29.689 03:06:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:25:29.689 03:06:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:29.689 03:06:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:29.689 03:06:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:29.689 03:06:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:29.689 03:06:20 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:29.689 03:06:20 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:32.967 03:06:23 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:32.967 03:06:23 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:33.224 03:06:23 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:25:33.224 03:06:23 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:33.481 03:06:24 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:25:33.481 03:06:24 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:25:33.481 03:06:24 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:33.481 03:06:24 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:25:33.481 03:06:24 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:33.739 [2024-05-13 03:06:24.396501] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:33.739 03:06:24 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:33.996 03:06:24 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:33.996 03:06:24 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:34.253 03:06:24 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev 
in $bdevs 00:25:34.253 03:06:24 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:34.510 03:06:25 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:34.768 [2024-05-13 03:06:25.375772] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:34.768 [2024-05-13 03:06:25.376080] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:34.768 03:06:25 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:35.025 03:06:25 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:25:35.025 03:06:25 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:25:35.025 03:06:25 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:25:35.025 03:06:25 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:25:36.395 Initializing NVMe Controllers 00:25:36.395 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:25:36.395 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:25:36.395 Initialization complete. Launching workers. 00:25:36.395 ======================================================== 00:25:36.395 Latency(us) 00:25:36.395 Device Information : IOPS MiB/s Average min max 00:25:36.395 PCIE (0000:88:00.0) NSID 1 from core 0: 86199.28 336.72 370.68 37.69 6261.96 00:25:36.395 ======================================================== 00:25:36.395 Total : 86199.28 336.72 370.68 37.69 6261.96 00:25:36.395 00:25:36.395 03:06:26 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:36.395 EAL: No free 2048 kB hugepages reported on node 1 00:25:37.326 Initializing NVMe Controllers 00:25:37.326 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:37.326 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:37.326 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:37.326 Initialization complete. Launching workers. 
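For readability, the target configuration that the rpc.py calls above establish, and that all of the tcp perf runs below exercise, is: one 64 MiB malloc bdev plus the local NVMe drive at 0000:88:00.0, exported as two namespaces of a single subsystem listening on 10.0.0.2:4420. Replayed from the logged RPCs (rpc.py stands for the full scripts/rpc.py path shown above; the NSID comments assume SPDK's default sequential namespace assignment):

  rpc.py bdev_malloc_create 64 512                                   # Malloc0: 64 MiB of 512-byte blocks
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # becomes NSID 1
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1    # becomes NSID 2
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Nvme0n1 itself comes from the gen_nvme.sh/load_subsystem_config step attaching the 0000:88:00.0 controller, so the TCP results below can be compared against the local PCIe baseline on the same drive.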
00:25:37.326 ======================================================== 00:25:37.326 Latency(us) 00:25:37.326 Device Information : IOPS MiB/s Average min max 00:25:37.326 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 93.00 0.36 10886.77 269.32 45458.37 00:25:37.326 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 54.00 0.21 19192.22 4994.84 55864.15 00:25:37.326 ======================================================== 00:25:37.326 Total : 147.00 0.57 13937.75 269.32 55864.15 00:25:37.326 00:25:37.326 03:06:28 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:37.583 EAL: No free 2048 kB hugepages reported on node 1 00:25:38.957 Initializing NVMe Controllers 00:25:38.957 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:38.957 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:38.957 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:38.957 Initialization complete. Launching workers. 00:25:38.957 ======================================================== 00:25:38.957 Latency(us) 00:25:38.957 Device Information : IOPS MiB/s Average min max 00:25:38.957 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6989.95 27.30 4587.91 834.22 8866.68 00:25:38.957 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3821.40 14.93 8413.65 6047.95 16309.09 00:25:38.957 ======================================================== 00:25:38.957 Total : 10811.35 42.23 5940.17 834.22 16309.09 00:25:38.957 00:25:38.957 03:06:29 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:25:38.957 03:06:29 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:25:38.957 03:06:29 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:38.957 EAL: No free 2048 kB hugepages reported on node 1 00:25:41.532 Initializing NVMe Controllers 00:25:41.532 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:41.532 Controller IO queue size 128, less than required. 00:25:41.532 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:41.532 Controller IO queue size 128, less than required. 00:25:41.532 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:41.532 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:41.532 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:41.532 Initialization complete. Launching workers. 
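Worth noting when reading the latency tables: the same spdk_nvme_perf binary drives the local baseline and every fabrics run, and only the -r transport ID (plus queue depth, IO size, and duration) changes between them. The invocations logged so far, with the build/bin path abbreviated, are:

  spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0'
  spdk_nvme_perf -q 1   -o 4096             -w randrw -M 50 -t 1     -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
  spdk_nvme_perf -q 32  -o 4096             -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
  spdk_nvme_perf -q 128 -o 262144 -O 16384  -w randrw -M 50 -t 2     -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

The first line attaches directly to the PCIe drive; the rest connect as an NVMe/TCP host to the subsystem configured above.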
00:25:41.532 ======================================================== 00:25:41.532 Latency(us) 00:25:41.532 Device Information : IOPS MiB/s Average min max 00:25:41.532 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 683.00 170.75 193634.70 99505.30 246156.50 00:25:41.532 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 589.50 147.37 226982.09 95968.67 334853.35 00:25:41.532 ======================================================== 00:25:41.532 Total : 1272.50 318.12 209083.25 95968.67 334853.35 00:25:41.532 00:25:41.532 03:06:31 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:25:41.532 EAL: No free 2048 kB hugepages reported on node 1 00:25:41.532 No valid NVMe controllers or AIO or URING devices found 00:25:41.532 Initializing NVMe Controllers 00:25:41.532 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:41.532 Controller IO queue size 128, less than required. 00:25:41.532 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:41.532 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:41.532 Controller IO queue size 128, less than required. 00:25:41.532 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:41.532 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:25:41.532 WARNING: Some requested NVMe devices were skipped 00:25:41.532 03:06:31 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:25:41.532 EAL: No free 2048 kB hugepages reported on node 1 00:25:44.060 Initializing NVMe Controllers 00:25:44.060 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:44.060 Controller IO queue size 128, less than required. 00:25:44.060 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:44.060 Controller IO queue size 128, less than required. 00:25:44.060 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:44.060 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:44.060 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:44.060 Initialization complete. Launching workers. 
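The "No valid NVMe controllers or AIO or URING devices found" message in the -o 36964 run above is the expected outcome of that step rather than a target failure: 36964 bytes is not a multiple of the 512-byte sector size of either namespace, so spdk_nvme_perf removes both namespaces from the test (the two WARNING lines) and is left with nothing to run against. A quick check of the arithmetic:

  $ echo $((36964 / 512)) $((36964 % 512))
  72 100          # 72 * 512 = 36864, remainder 100, so the IO size is not sector-aligned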
00:25:44.060 00:25:44.060 ==================== 00:25:44.060 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:44.060 TCP transport: 00:25:44.060 polls: 45132 00:25:44.060 idle_polls: 14808 00:25:44.060 sock_completions: 30324 00:25:44.060 nvme_completions: 3081 00:25:44.060 submitted_requests: 4610 00:25:44.060 queued_requests: 1 00:25:44.060 00:25:44.060 ==================== 00:25:44.060 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:44.060 TCP transport: 00:25:44.060 polls: 46241 00:25:44.060 idle_polls: 16437 00:25:44.060 sock_completions: 29804 00:25:44.060 nvme_completions: 2913 00:25:44.060 submitted_requests: 4398 00:25:44.060 queued_requests: 1 00:25:44.060 ======================================================== 00:25:44.060 Latency(us) 00:25:44.060 Device Information : IOPS MiB/s Average min max 00:25:44.060 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 769.48 192.37 171707.81 90327.83 238844.61 00:25:44.060 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 727.51 181.88 182095.09 72667.94 239534.98 00:25:44.060 ======================================================== 00:25:44.060 Total : 1496.99 374.25 176755.83 72667.94 239534.98 00:25:44.060 00:25:44.060 03:06:34 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:25:44.060 03:06:34 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:44.318 03:06:34 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:25:44.318 03:06:34 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:25:44.318 03:06:34 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:25:47.596 03:06:38 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=b0ec664a-77e0-42ac-97c5-19fd22c22ad3 00:25:47.596 03:06:38 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb b0ec664a-77e0-42ac-97c5-19fd22c22ad3 00:25:47.596 03:06:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=b0ec664a-77e0-42ac-97c5-19fd22c22ad3 00:25:47.596 03:06:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:25:47.596 03:06:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:25:47.596 03:06:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:25:47.596 03:06:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:47.856 03:06:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:25:47.856 { 00:25:47.856 "uuid": "b0ec664a-77e0-42ac-97c5-19fd22c22ad3", 00:25:47.856 "name": "lvs_0", 00:25:47.856 "base_bdev": "Nvme0n1", 00:25:47.856 "total_data_clusters": 238234, 00:25:47.856 "free_clusters": 238234, 00:25:47.856 "block_size": 512, 00:25:47.856 "cluster_size": 4194304 00:25:47.856 } 00:25:47.856 ]' 00:25:47.857 03:06:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="b0ec664a-77e0-42ac-97c5-19fd22c22ad3") .free_clusters' 00:25:47.857 03:06:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=238234 00:25:47.857 03:06:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="b0ec664a-77e0-42ac-97c5-19fd22c22ad3") .cluster_size' 00:25:47.857 03:06:38 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:25:47.857 03:06:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=952936 00:25:47.857 03:06:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 952936 00:25:47.857 952936 00:25:47.857 03:06:38 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:25:47.857 03:06:38 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:25:47.857 03:06:38 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b0ec664a-77e0-42ac-97c5-19fd22c22ad3 lbd_0 20480 00:25:48.116 03:06:38 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=e25011aa-8482-4532-a082-54c4eca2a726 00:25:48.116 03:06:38 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore e25011aa-8482-4532-a082-54c4eca2a726 lvs_n_0 00:25:49.047 03:06:39 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=6be47c77-c299-40d0-a7cd-09aa89aa7782 00:25:49.048 03:06:39 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 6be47c77-c299-40d0-a7cd-09aa89aa7782 00:25:49.048 03:06:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=6be47c77-c299-40d0-a7cd-09aa89aa7782 00:25:49.048 03:06:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:25:49.048 03:06:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:25:49.048 03:06:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:25:49.048 03:06:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:49.305 03:06:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:25:49.305 { 00:25:49.305 "uuid": "b0ec664a-77e0-42ac-97c5-19fd22c22ad3", 00:25:49.305 "name": "lvs_0", 00:25:49.305 "base_bdev": "Nvme0n1", 00:25:49.305 "total_data_clusters": 238234, 00:25:49.305 "free_clusters": 233114, 00:25:49.305 "block_size": 512, 00:25:49.305 "cluster_size": 4194304 00:25:49.305 }, 00:25:49.305 { 00:25:49.305 "uuid": "6be47c77-c299-40d0-a7cd-09aa89aa7782", 00:25:49.305 "name": "lvs_n_0", 00:25:49.305 "base_bdev": "e25011aa-8482-4532-a082-54c4eca2a726", 00:25:49.305 "total_data_clusters": 5114, 00:25:49.305 "free_clusters": 5114, 00:25:49.305 "block_size": 512, 00:25:49.305 "cluster_size": 4194304 00:25:49.305 } 00:25:49.305 ]' 00:25:49.305 03:06:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="6be47c77-c299-40d0-a7cd-09aa89aa7782") .free_clusters' 00:25:49.305 03:06:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=5114 00:25:49.305 03:06:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="6be47c77-c299-40d0-a7cd-09aa89aa7782") .cluster_size' 00:25:49.562 03:06:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:25:49.562 03:06:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=20456 00:25:49.562 03:06:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 20456 00:25:49.562 20456 00:25:49.562 03:06:40 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:25:49.562 03:06:40 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6be47c77-c299-40d0-a7cd-09aa89aa7782 lbd_nest_0 20456 00:25:49.819 03:06:40 nvmf_tcp.nvmf_perf -- 
host/perf.sh@88 -- # lb_nested_guid=24550e0a-457c-4589-9ef0-e0568d5a5597 00:25:49.819 03:06:40 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:50.076 03:06:40 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:25:50.076 03:06:40 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 24550e0a-457c-4589-9ef0-e0568d5a5597 00:25:50.333 03:06:40 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:50.590 03:06:41 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:25:50.590 03:06:41 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:25:50.590 03:06:41 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:25:50.590 03:06:41 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:50.590 03:06:41 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:50.590 EAL: No free 2048 kB hugepages reported on node 1 00:26:02.781 Initializing NVMe Controllers 00:26:02.781 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:02.781 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:02.781 Initialization complete. Launching workers. 00:26:02.781 ======================================================== 00:26:02.781 Latency(us) 00:26:02.781 Device Information : IOPS MiB/s Average min max 00:26:02.781 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 44.89 0.02 22345.67 269.86 48627.09 00:26:02.781 ======================================================== 00:26:02.781 Total : 44.89 0.02 22345.67 269.86 48627.09 00:26:02.781 00:26:02.781 03:06:51 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:02.781 03:06:51 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:02.781 EAL: No free 2048 kB hugepages reported on node 1 00:26:12.743 Initializing NVMe Controllers 00:26:12.743 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:12.743 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:12.743 Initialization complete. Launching workers. 
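The runs in this part of the log no longer target the raw drive: perf.sh rebuilds the subsystem on top of a nested logical-volume stack, which is what the bdev_lvol_* RPCs above set up. The sizes fall straight out of the lvstore geometry reported by bdev_lvol_get_lvstores, free_clusters times the 4 MiB cluster size: 238234 * 4 = 952936 MiB for lvs_0 (capped to 20480 MiB for the volume), and 5114 * 4 = 20456 MiB for the nested store, whose 5114 usable clusters are slightly fewer than the 5120 clusters of the 20480 MiB volume it sits on. Condensed from the logged calls, with rpc.py again standing for the full script path and the UUIDs as captured in this run:

  rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0
  rpc.py bdev_lvol_create -u b0ec664a-77e0-42ac-97c5-19fd22c22ad3 lbd_0 20480
  rpc.py bdev_lvol_create_lvstore e25011aa-8482-4532-a082-54c4eca2a726 lvs_n_0
  rpc.py bdev_lvol_create -u 6be47c77-c299-40d0-a7cd-09aa89aa7782 lbd_nest_0 20456
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 24550e0a-457c-4589-9ef0-e0568d5a5597
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

So the namespace exercised below is lbd_nest_0, a 20456 MiB lvol nested inside another lvol on the physical drive.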
00:26:12.743 ======================================================== 00:26:12.743 Latency(us) 00:26:12.743 Device Information : IOPS MiB/s Average min max 00:26:12.743 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 84.80 10.60 11808.97 6034.97 19972.65 00:26:12.743 ======================================================== 00:26:12.743 Total : 84.80 10.60 11808.97 6034.97 19972.65 00:26:12.743 00:26:12.743 03:07:02 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:12.743 03:07:02 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:12.743 03:07:02 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:12.743 EAL: No free 2048 kB hugepages reported on node 1 00:26:22.700 Initializing NVMe Controllers 00:26:22.700 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:22.700 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:22.700 Initialization complete. Launching workers. 00:26:22.700 ======================================================== 00:26:22.700 Latency(us) 00:26:22.700 Device Information : IOPS MiB/s Average min max 00:26:22.700 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6857.70 3.35 4665.90 331.11 12136.61 00:26:22.700 ======================================================== 00:26:22.700 Total : 6857.70 3.35 4665.90 331.11 12136.61 00:26:22.700 00:26:22.700 03:07:12 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:22.700 03:07:12 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:22.700 EAL: No free 2048 kB hugepages reported on node 1 00:26:32.668 Initializing NVMe Controllers 00:26:32.668 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:32.668 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:32.668 Initialization complete. Launching workers. 00:26:32.668 ======================================================== 00:26:32.668 Latency(us) 00:26:32.668 Device Information : IOPS MiB/s Average min max 00:26:32.668 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1360.60 170.07 23552.62 1997.84 47946.44 00:26:32.668 ======================================================== 00:26:32.668 Total : 1360.60 170.07 23552.62 1997.84 47946.44 00:26:32.668 00:26:32.668 03:07:22 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:32.668 03:07:22 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:32.668 03:07:22 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:32.668 EAL: No free 2048 kB hugepages reported on node 1 00:26:42.642 Initializing NVMe Controllers 00:26:42.642 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:42.642 Controller IO queue size 128, less than required. 00:26:42.642 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
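The six Initializing/Latency blocks in this stretch of the log are a plain sweep over queue depth and IO size against that nested-lvol namespace. Reconstructed from the qd_depth and io_size arrays and the commands logged above (binary path abbreviated), the loop in perf.sh amounts to:

  qd_depth=(1 32 128)
  io_size=(512 131072)
  for qd in "${qd_depth[@]}"; do
    for o in "${io_size[@]}"; do
      spdk_nvme_perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    done
  done

which accounts for the six result tables: 1, 32, and 128 outstanding IOs, each at 512 B and 128 KiB.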
00:26:42.642 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:42.642 Initialization complete. Launching workers. 00:26:42.642 ======================================================== 00:26:42.642 Latency(us) 00:26:42.642 Device Information : IOPS MiB/s Average min max 00:26:42.642 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11272.60 5.50 11367.98 2809.29 58249.10 00:26:42.642 ======================================================== 00:26:42.642 Total : 11272.60 5.50 11367.98 2809.29 58249.10 00:26:42.642 00:26:42.642 03:07:33 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:42.642 03:07:33 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:42.642 EAL: No free 2048 kB hugepages reported on node 1 00:26:54.842 Initializing NVMe Controllers 00:26:54.842 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:54.842 Controller IO queue size 128, less than required. 00:26:54.842 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:54.842 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:54.842 Initialization complete. Launching workers. 00:26:54.842 ======================================================== 00:26:54.842 Latency(us) 00:26:54.842 Device Information : IOPS MiB/s Average min max 00:26:54.842 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1198.70 149.84 107258.26 23934.47 230658.38 00:26:54.842 ======================================================== 00:26:54.842 Total : 1198.70 149.84 107258.26 23934.47 230658.38 00:26:54.842 00:26:54.842 03:07:43 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:54.842 03:07:43 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 24550e0a-457c-4589-9ef0-e0568d5a5597 00:26:54.842 03:07:44 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:26:54.842 03:07:44 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e25011aa-8482-4532-a082-54c4eca2a726 00:26:54.842 03:07:45 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:26:54.842 03:07:45 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:26:54.842 03:07:45 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:26:54.842 03:07:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:54.842 03:07:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:26:54.842 03:07:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:54.842 03:07:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:26:54.842 03:07:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:54.842 03:07:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:54.842 rmmod nvme_tcp 00:26:54.842 rmmod nvme_fabrics 00:26:54.842 rmmod nvme_keyring 00:26:54.842 03:07:45 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:54.842 03:07:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:26:54.842 03:07:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:26:54.842 03:07:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 427014 ']' 00:26:54.842 03:07:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 427014 00:26:54.842 03:07:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 427014 ']' 00:26:54.842 03:07:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 427014 00:26:54.842 03:07:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:26:54.842 03:07:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:54.842 03:07:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 427014 00:26:54.842 03:07:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:54.842 03:07:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:54.842 03:07:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 427014' 00:26:54.842 killing process with pid 427014 00:26:54.842 03:07:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 427014 00:26:54.842 [2024-05-13 03:07:45.543203] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:54.842 03:07:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 427014 00:26:56.740 03:07:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:56.740 03:07:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:56.740 03:07:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:56.741 03:07:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:56.741 03:07:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:56.741 03:07:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:56.741 03:07:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:56.741 03:07:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:58.640 03:07:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:58.640 00:26:58.640 real 1m31.336s 00:26:58.640 user 5m34.832s 00:26:58.640 sys 0m16.417s 00:26:58.640 03:07:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:58.640 03:07:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:58.640 ************************************ 00:26:58.640 END TEST nvmf_perf 00:26:58.640 ************************************ 00:26:58.640 03:07:49 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:58.640 03:07:49 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:58.640 03:07:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:58.640 03:07:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:58.640 ************************************ 00:26:58.640 START TEST nvmf_fio_host 00:26:58.640 ************************************ 00:26:58.640 03:07:49 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:58.640 * Looking for test storage... 00:26:58.640 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:58.640 03:07:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:58.640 03:07:49 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:58.640 03:07:49 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:58.640 03:07:49 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:58.640 03:07:49 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.640 03:07:49 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.640 03:07:49 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.640 03:07:49 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:26:58.640 03:07:49 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.640 03:07:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:58.640 03:07:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:26:58.640 03:07:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
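For reference, the latency tables earlier in this run come from host/perf.sh walking every queue-depth/IO-size pair and invoking spdk_nvme_perf against the TCP listener. A minimal standalone sketch of that sweep, assuming spdk_nvme_perf is built and a subsystem is already listening on 10.0.0.2:4420 as in this run (the binary path is an assumption; adjust it to your tree):

    #!/usr/bin/env bash
    # Re-run the qd_depth x io_size sweep traced in host/perf.sh above.
    set -e
    PERF=./build/bin/spdk_nvme_perf    # assumed location of the perf tool
    TRID='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

    qd_depth=("1" "32" "128")
    io_size=("512" "131072")

    for qd in "${qd_depth[@]}"; do
        for o in "${io_size[@]}"; do
            echo "=== qd=$qd, io_size=$o bytes ==="
            # 50/50 random read/write for 10 seconds, matching the runs above
            "$PERF" -q "$qd" -o "$o" -w randrw -M 50 -t 10 -r "$TRID"
        done
    done

The "Controller IO queue size 128, less than required" notice in the -q 128 runs above only means the requested queue depth is not smaller than the controller's I/O queue size, so some requests wait in the driver instead of being outstanding on the wire; it is not a failure.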
00:26:58.640 03:07:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:58.640 03:07:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:58.640 03:07:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:58.640 03:07:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:58.640 03:07:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:58.640 03:07:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:58.640 03:07:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:58.640 03:07:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:58.640 03:07:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:58.640 03:07:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:58.640 03:07:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:58.640 03:07:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:58.640 03:07:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:58.640 03:07:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:58.640 03:07:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:58.640 03:07:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:58.640 03:07:49 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:58.640 03:07:49 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:58.640 03:07:49 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:58.640 03:07:49 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.640 03:07:49 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.641 03:07:49 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.641 03:07:49 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:26:58.641 03:07:49 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.641 03:07:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:26:58.641 03:07:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:58.641 03:07:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:58.641 03:07:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:58.641 03:07:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:58.641 03:07:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:58.641 03:07:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:58.641 03:07:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:58.641 03:07:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:58.641 03:07:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # nvmftestinit 00:26:58.641 03:07:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:58.641 03:07:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:58.641 03:07:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:58.641 03:07:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:58.641 03:07:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:58.641 03:07:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:58.641 03:07:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:58.641 03:07:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:58.641 03:07:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:58.641 03:07:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:58.641 03:07:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:26:58.641 03:07:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
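The nvmf/common.sh prologue traced here generates a host NQN with 'nvme gen-hostnqn' and stashes 'nvme connect' plus --hostnqn/--hostid in NVME_HOST for the kernel-initiator test cases. Outside the harness, the equivalent manual nvme-cli session against the listener used in this run would look roughly like the sketch below (the subsystem NQN and address are the ones from this log; the generated host NQN differs per machine):

    # Attach a Linux kernel NVMe/TCP initiator to the target from this run.
    # Requires nvme-cli and the nvme-tcp kernel module; run as root.
    modprobe nvme-tcp

    hostnqn=$(nvme gen-hostnqn)                 # same helper the harness uses
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
         -n nqn.2016-06.io.spdk:cnode1 \
         --hostnqn="$hostnqn"

    nvme list                                   # namespace appears as /dev/nvmeXn1

    # Tear the session down again when finished.
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1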
00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:00.541 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:00.541 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:00.541 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:00.541 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp 
]] 00:27:00.541 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:00.542 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:00.542 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:00.542 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:00.542 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:00.542 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:00.542 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:00.542 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:00.542 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:00.542 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:00.542 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:00.542 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:00.542 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:00.542 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:00.800 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:00.800 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:00.800 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:00.800 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:00.800 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:00.800 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:00.800 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:00.800 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:00.800 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:27:00.800 00:27:00.800 --- 10.0.0.2 ping statistics --- 00:27:00.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:00.800 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:27:00.800 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:00.800 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:00.800 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:27:00.800 00:27:00.800 --- 10.0.0.1 ping statistics --- 00:27:00.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:00.800 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:27:00.800 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:00.800 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:27:00.800 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:00.800 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:00.800 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:00.800 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:00.800 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:00.800 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:00.800 03:07:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:00.800 03:07:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # [[ y != y ]] 00:27:00.801 03:07:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:27:00.801 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:00.801 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.801 03:07:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@22 -- # nvmfpid=438989 00:27:00.801 03:07:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:00.801 03:07:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:00.801 03:07:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # waitforlisten 438989 00:27:00.801 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 438989 ']' 00:27:00.801 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:00.801 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:00.801 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:00.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:00.801 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:00.801 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.801 [2024-05-13 03:07:51.505002] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:27:00.801 [2024-05-13 03:07:51.505089] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:00.801 EAL: No free 2048 kB hugepages reported on node 1 00:27:00.801 [2024-05-13 03:07:51.548529] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
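The environment the target is starting into here was assembled by nvmf_tcp_init a few lines back: one of the two detected ports (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, its sibling port (cvl_0_1) stays in the root namespace as the 10.0.0.1 initiator side, and TCP port 4420 is opened in iptables. Condensed from the trace above into a runnable sketch (interface names are the ones detected on this host; run as root, and adjust the nvmf_tgt path to your build):

    # Two-port loopback topology used by the nvmf TCP tests, per nvmf_tcp_init above.
    TGT_IF=cvl_0_0          # target-side port (detected on this host)
    INI_IF=cvl_0_1          # initiator-side port
    NS=cvl_0_0_ns_spdk

    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"

    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up

    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

    # Sanity checks, as in the log: each side can reach the other.
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

    # The target then runs inside the namespace so its listener binds 10.0.0.2.
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF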
00:27:00.801 [2024-05-13 03:07:51.575397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:01.059 [2024-05-13 03:07:51.664236] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:01.059 [2024-05-13 03:07:51.664287] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:01.059 [2024-05-13 03:07:51.664311] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:01.059 [2024-05-13 03:07:51.664322] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:01.059 [2024-05-13 03:07:51.664331] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:01.059 [2024-05-13 03:07:51.664465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:01.059 [2024-05-13 03:07:51.664498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:01.059 [2024-05-13 03:07:51.664566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:01.059 [2024-05-13 03:07:51.664569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:01.059 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:01.059 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:27:01.059 03:07:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:01.059 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.059 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.059 [2024-05-13 03:07:51.782199] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:01.059 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.059 03:07:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:27:01.059 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:01.059 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.059 03:07:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:01.059 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.059 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.059 Malloc1 00:27:01.059 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.059 03:07:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:01.059 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.059 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.059 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.059 03:07:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:01.059 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.059 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.059 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.059 03:07:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:01.059 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.059 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.060 [2024-05-13 03:07:51.852913] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:01.060 [2024-05-13 03:07:51.853211] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:01.060 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.060 03:07:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:01.060 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.060 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.318 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.318 03:07:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:01.318 03:07:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:01.318 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:01.318 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:27:01.318 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:01.318 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:27:01.318 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:01.318 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:27:01.318 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:27:01.318 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:27:01.318 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:01.318 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:27:01.318 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:27:01.318 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:27:01.318 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:27:01.318 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:27:01.318 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:01.318 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # 
grep libclang_rt.asan 00:27:01.318 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:27:01.318 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:27:01.318 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:27:01.318 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:01.318 03:07:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:01.318 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:01.318 fio-3.35 00:27:01.318 Starting 1 thread 00:27:01.318 EAL: No free 2048 kB hugepages reported on node 1 00:27:03.844 00:27:03.844 test: (groupid=0, jobs=1): err= 0: pid=439206: Mon May 13 03:07:54 2024 00:27:03.844 read: IOPS=9109, BW=35.6MiB/s (37.3MB/s)(71.4MiB/2006msec) 00:27:03.844 slat (nsec): min=1771, max=158828, avg=2468.07, stdev=1796.70 00:27:03.844 clat (usec): min=4956, max=13126, avg=7763.26, stdev=636.85 00:27:03.844 lat (usec): min=4981, max=13129, avg=7765.73, stdev=636.88 00:27:03.844 clat percentiles (usec): 00:27:03.844 | 1.00th=[ 6325], 5.00th=[ 6783], 10.00th=[ 7046], 20.00th=[ 7308], 00:27:03.844 | 30.00th=[ 7439], 40.00th=[ 7570], 50.00th=[ 7767], 60.00th=[ 7898], 00:27:03.844 | 70.00th=[ 8029], 80.00th=[ 8225], 90.00th=[ 8455], 95.00th=[ 8717], 00:27:03.844 | 99.00th=[ 9372], 99.50th=[10159], 99.90th=[11863], 99.95th=[12518], 00:27:03.844 | 99.99th=[12649] 00:27:03.844 bw ( KiB/s): min=35264, max=37152, per=99.89%, avg=36398.00, stdev=802.51, samples=4 00:27:03.844 iops : min= 8816, max= 9288, avg=9099.50, stdev=200.63, samples=4 00:27:03.844 write: IOPS=9119, BW=35.6MiB/s (37.4MB/s)(71.5MiB/2006msec); 0 zone resets 00:27:03.844 slat (nsec): min=1914, max=145093, avg=2640.11, stdev=1459.51 00:27:03.844 clat (usec): min=1512, max=11022, avg=6196.08, stdev=537.19 00:27:03.844 lat (usec): min=1521, max=11024, avg=6198.72, stdev=537.17 00:27:03.844 clat percentiles (usec): 00:27:03.844 | 1.00th=[ 4948], 5.00th=[ 5342], 10.00th=[ 5538], 20.00th=[ 5800], 00:27:03.845 | 30.00th=[ 5932], 40.00th=[ 6063], 50.00th=[ 6194], 60.00th=[ 6325], 00:27:03.845 | 70.00th=[ 6456], 80.00th=[ 6587], 90.00th=[ 6849], 95.00th=[ 6980], 00:27:03.845 | 99.00th=[ 7373], 99.50th=[ 7635], 99.90th=[ 9503], 99.95th=[10290], 00:27:03.845 | 99.99th=[10945] 00:27:03.845 bw ( KiB/s): min=36040, max=36928, per=100.00%, avg=36482.00, stdev=362.53, samples=4 00:27:03.845 iops : min= 9010, max= 9232, avg=9120.50, stdev=90.63, samples=4 00:27:03.845 lat (msec) : 2=0.01%, 4=0.06%, 10=99.60%, 20=0.34% 00:27:03.845 cpu : usr=45.64%, sys=41.90%, ctx=77, majf=0, minf=37 00:27:03.845 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:27:03.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:03.845 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:03.845 issued rwts: total=18274,18294,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:03.845 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:03.845 00:27:03.845 Run status group 0 (all jobs): 00:27:03.845 READ: bw=35.6MiB/s (37.3MB/s), 35.6MiB/s-35.6MiB/s (37.3MB/s-37.3MB/s), io=71.4MiB (74.8MB), run=2006-2006msec 
00:27:03.845 WRITE: bw=35.6MiB/s (37.4MB/s), 35.6MiB/s-35.6MiB/s (37.4MB/s-37.4MB/s), io=71.5MiB (74.9MB), run=2006-2006msec 00:27:03.845 03:07:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:03.845 03:07:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:03.845 03:07:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:27:03.845 03:07:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:03.845 03:07:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:27:03.845 03:07:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:03.845 03:07:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:27:03.845 03:07:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:27:03.845 03:07:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:27:03.845 03:07:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:03.845 03:07:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:27:03.845 03:07:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:27:03.845 03:07:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:27:03.845 03:07:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:27:03.845 03:07:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:27:03.845 03:07:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:03.845 03:07:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:27:03.845 03:07:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:27:03.845 03:07:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:27:03.845 03:07:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:27:03.845 03:07:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:03.845 03:07:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:04.102 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:27:04.102 fio-3.35 00:27:04.102 Starting 1 thread 00:27:04.102 EAL: No free 2048 kB hugepages reported on node 1 00:27:06.630 00:27:06.630 test: (groupid=0, jobs=1): err= 0: pid=439655: Mon May 13 03:07:57 2024 00:27:06.630 read: IOPS=7379, BW=115MiB/s (121MB/s)(231MiB/2007msec) 
00:27:06.630 slat (usec): min=2, max=111, avg= 3.65, stdev= 1.67 00:27:06.630 clat (usec): min=3696, max=24844, avg=10778.86, stdev=2799.81 00:27:06.630 lat (usec): min=3700, max=24847, avg=10782.52, stdev=2799.91 00:27:06.630 clat percentiles (usec): 00:27:06.630 | 1.00th=[ 5276], 5.00th=[ 6325], 10.00th=[ 7177], 20.00th=[ 8455], 00:27:06.630 | 30.00th=[ 9241], 40.00th=[ 9896], 50.00th=[10552], 60.00th=[11338], 00:27:06.630 | 70.00th=[12256], 80.00th=[13042], 90.00th=[14353], 95.00th=[15926], 00:27:06.630 | 99.00th=[17957], 99.50th=[18482], 99.90th=[20055], 99.95th=[20317], 00:27:06.630 | 99.99th=[22414] 00:27:06.630 bw ( KiB/s): min=47328, max=68992, per=50.61%, avg=59760.00, stdev=10496.86, samples=4 00:27:06.630 iops : min= 2958, max= 4312, avg=3735.00, stdev=656.05, samples=4 00:27:06.630 write: IOPS=4227, BW=66.0MiB/s (69.3MB/s)(121MiB/1836msec); 0 zone resets 00:27:06.630 slat (usec): min=30, max=200, avg=33.60, stdev= 5.69 00:27:06.630 clat (usec): min=5301, max=20499, avg=11555.86, stdev=2033.71 00:27:06.630 lat (usec): min=5332, max=20530, avg=11589.46, stdev=2034.82 00:27:06.630 clat percentiles (usec): 00:27:06.630 | 1.00th=[ 7767], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9896], 00:27:06.630 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11338], 60.00th=[11863], 00:27:06.630 | 70.00th=[12387], 80.00th=[13173], 90.00th=[14091], 95.00th=[15270], 00:27:06.630 | 99.00th=[17957], 99.50th=[17957], 99.90th=[18220], 99.95th=[18220], 00:27:06.630 | 99.99th=[20579] 00:27:06.630 bw ( KiB/s): min=49216, max=71648, per=91.80%, avg=62088.00, stdev=10728.59, samples=4 00:27:06.630 iops : min= 3076, max= 4478, avg=3880.50, stdev=670.54, samples=4 00:27:06.630 lat (msec) : 4=0.03%, 10=34.60%, 20=65.31%, 50=0.06% 00:27:06.630 cpu : usr=77.58%, sys=18.34%, ctx=19, majf=0, minf=53 00:27:06.630 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:27:06.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:06.630 issued rwts: total=14811,7761,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:06.630 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:06.630 00:27:06.630 Run status group 0 (all jobs): 00:27:06.630 READ: bw=115MiB/s (121MB/s), 115MiB/s-115MiB/s (121MB/s-121MB/s), io=231MiB (243MB), run=2007-2007msec 00:27:06.630 WRITE: bw=66.0MiB/s (69.3MB/s), 66.0MiB/s-66.0MiB/s (69.3MB/s-69.3MB/s), io=121MiB (127MB), run=1836-1836msec 00:27:06.630 03:07:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:06.630 03:07:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.630 03:07:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.630 03:07:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.630 03:07:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # '[' 1 -eq 1 ']' 00:27:06.630 03:07:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # bdfs=($(get_nvme_bdfs)) 00:27:06.630 03:07:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # get_nvme_bdfs 00:27:06.630 03:07:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # bdfs=() 00:27:06.630 03:07:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # local bdfs 00:27:06.630 03:07:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:06.630 03:07:57 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:06.630 03:07:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:27:06.630 03:07:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:27:06.630 03:07:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:27:06.630 03:07:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@50 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:27:06.630 03:07:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.630 03:07:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.910 Nvme0n1 00:27:09.910 03:08:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.910 03:08:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # rpc_cmd bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:27:09.910 03:08:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.910 03:08:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.437 03:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.437 03:08:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # ls_guid=97af83e6-5f29-4007-a67f-48603aff370b 00:27:12.437 03:08:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # get_lvs_free_mb 97af83e6-5f29-4007-a67f-48603aff370b 00:27:12.437 03:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=97af83e6-5f29-4007-a67f-48603aff370b 00:27:12.437 03:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:27:12.437 03:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:27:12.437 03:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:27:12.437 03:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # rpc_cmd bdev_lvol_get_lvstores 00:27:12.437 03:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.437 03:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.437 03:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.437 03:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:27:12.437 { 00:27:12.437 "uuid": "97af83e6-5f29-4007-a67f-48603aff370b", 00:27:12.437 "name": "lvs_0", 00:27:12.437 "base_bdev": "Nvme0n1", 00:27:12.437 "total_data_clusters": 930, 00:27:12.437 "free_clusters": 930, 00:27:12.437 "block_size": 512, 00:27:12.437 "cluster_size": 1073741824 00:27:12.437 } 00:27:12.437 ]' 00:27:12.437 03:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="97af83e6-5f29-4007-a67f-48603aff370b") .free_clusters' 00:27:12.437 03:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=930 00:27:12.437 03:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="97af83e6-5f29-4007-a67f-48603aff370b") .cluster_size' 00:27:12.437 03:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=1073741824 00:27:12.437 03:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=952320 00:27:12.437 03:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 952320 00:27:12.437 952320 00:27:12.437 03:08:02 
nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # rpc_cmd bdev_lvol_create -l lvs_0 lbd_0 952320 00:27:12.437 03:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.437 03:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.437 2bb7f522-ec14-4bf4-907c-9fd8d1d08fc3 00:27:12.437 03:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.437 03:08:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:27:12.437 03:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.437 03:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.437 03:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.437 03:08:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:27:12.437 03:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.437 03:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.437 03:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.437 03:08:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:12.437 03:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.437 03:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.437 03:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.437 03:08:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:12.438 03:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:12.438 03:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:27:12.438 03:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:12.438 03:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:27:12.438 03:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:12.438 03:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:27:12.438 03:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:27:12.438 03:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:27:12.438 03:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:12.438 03:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:27:12.438 03:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:27:12.438 03:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 
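The 952320 handed to bdev_lvol_create just above is not hard-coded; get_lvs_free_mb derives it from the lvstore JSON as free_clusters times cluster_size expressed in MiB (930 clusters of 1 GiB here). A standalone equivalent using rpc.py and jq, with the lvs_0 UUID from this run, would be roughly:

    # Compute an lvstore's usable size in MiB, as get_lvs_free_mb does above.
    RPC=./scripts/rpc.py                         # adjust to your SPDK checkout
    uuid=97af83e6-5f29-4007-a67f-48603aff370b    # lvs_0 created in this run

    lvs_json=$("$RPC" bdev_lvol_get_lvstores)
    fc=$(jq ".[] | select(.uuid==\"$uuid\") .free_clusters" <<<"$lvs_json")
    cs=$(jq ".[] | select(.uuid==\"$uuid\") .cluster_size"  <<<"$lvs_json")

    free_mb=$((fc * cs / 1024 / 1024))           # 930 * 1073741824 -> 952320 MiB
    echo "$free_mb"

    # Size the logical volume to fill the store, exactly as fio.sh does above.
    "$RPC" bdev_lvol_create -l lvs_0 lbd_0 "$free_mb"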
00:27:12.438 03:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:27:12.438 03:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:27:12.438 03:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:12.438 03:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:27:12.438 03:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:27:12.438 03:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:27:12.438 03:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:27:12.438 03:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:12.438 03:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:12.438 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:12.438 fio-3.35 00:27:12.438 Starting 1 thread 00:27:12.438 EAL: No free 2048 kB hugepages reported on node 1 00:27:14.993 00:27:14.993 test: (groupid=0, jobs=1): err= 0: pid=440674: Mon May 13 03:08:05 2024 00:27:14.993 read: IOPS=6188, BW=24.2MiB/s (25.3MB/s)(48.6MiB/2009msec) 00:27:14.993 slat (nsec): min=1899, max=149465, avg=2554.85, stdev=2169.61 00:27:14.993 clat (usec): min=1439, max=172235, avg=11419.75, stdev=11470.03 00:27:14.993 lat (usec): min=1443, max=172263, avg=11422.31, stdev=11470.40 00:27:14.993 clat percentiles (msec): 00:27:14.993 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 10], 00:27:14.993 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:27:14.993 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 12], 95.00th=[ 13], 00:27:14.993 | 99.00th=[ 14], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 174], 00:27:14.993 | 99.99th=[ 174] 00:27:14.993 bw ( KiB/s): min=17200, max=27640, per=99.94%, avg=24738.00, stdev=5038.06, samples=4 00:27:14.993 iops : min= 4300, max= 6910, avg=6184.50, stdev=1259.51, samples=4 00:27:14.993 write: IOPS=6179, BW=24.1MiB/s (25.3MB/s)(48.5MiB/2009msec); 0 zone resets 00:27:14.993 slat (usec): min=2, max=153, avg= 2.68, stdev= 1.77 00:27:14.993 clat (usec): min=441, max=170513, avg=9140.78, stdev=10777.52 00:27:14.993 lat (usec): min=445, max=170522, avg=9143.45, stdev=10777.96 00:27:14.993 clat percentiles (msec): 00:27:14.993 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 8], 00:27:14.993 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:27:14.993 | 70.00th=[ 9], 80.00th=[ 9], 90.00th=[ 10], 95.00th=[ 10], 00:27:14.993 | 99.00th=[ 11], 99.50th=[ 17], 99.90th=[ 169], 99.95th=[ 171], 00:27:14.993 | 99.99th=[ 171] 00:27:14.993 bw ( KiB/s): min=18168, max=27072, per=99.93%, avg=24702.00, stdev=4361.95, samples=4 00:27:14.993 iops : min= 4542, max= 6768, avg=6175.50, stdev=1090.49, samples=4 00:27:14.993 lat (usec) : 500=0.01%, 1000=0.01% 00:27:14.993 lat (msec) : 2=0.03%, 4=0.10%, 10=61.43%, 20=37.91%, 250=0.52% 00:27:14.993 cpu : usr=51.49%, sys=40.34%, ctx=72, majf=0, minf=37 00:27:14.993 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:27:14.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.993 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:14.993 issued rwts: total=12432,12415,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:14.993 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:14.993 00:27:14.993 Run status group 0 (all jobs): 00:27:14.993 READ: bw=24.2MiB/s (25.3MB/s), 24.2MiB/s-24.2MiB/s (25.3MB/s-25.3MB/s), io=48.6MiB (50.9MB), run=2009-2009msec 00:27:14.993 WRITE: bw=24.1MiB/s (25.3MB/s), 24.1MiB/s-24.1MiB/s (25.3MB/s-25.3MB/s), io=48.5MiB (50.9MB), run=2009-2009msec 00:27:14.993 03:08:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:14.993 03:08:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.993 03:08:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.993 03:08:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.993 03:08:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@62 -- # rpc_cmd bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:27:14.993 03:08:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.993 03:08:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.922 03:08:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.922 03:08:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@62 -- # ls_nested_guid=596d677c-6664-42a2-ba7a-caacb9c95f83 00:27:15.922 03:08:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@63 -- # get_lvs_free_mb 596d677c-6664-42a2-ba7a-caacb9c95f83 00:27:15.922 03:08:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=596d677c-6664-42a2-ba7a-caacb9c95f83 00:27:15.922 03:08:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:27:15.922 03:08:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:27:15.922 03:08:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:27:15.922 03:08:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # rpc_cmd bdev_lvol_get_lvstores 00:27:15.922 03:08:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.922 03:08:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.922 03:08:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.922 03:08:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:27:15.922 { 00:27:15.922 "uuid": "97af83e6-5f29-4007-a67f-48603aff370b", 00:27:15.922 "name": "lvs_0", 00:27:15.922 "base_bdev": "Nvme0n1", 00:27:15.922 "total_data_clusters": 930, 00:27:15.922 "free_clusters": 0, 00:27:15.922 "block_size": 512, 00:27:15.922 "cluster_size": 1073741824 00:27:15.922 }, 00:27:15.922 { 00:27:15.922 "uuid": "596d677c-6664-42a2-ba7a-caacb9c95f83", 00:27:15.922 "name": "lvs_n_0", 00:27:15.922 "base_bdev": "2bb7f522-ec14-4bf4-907c-9fd8d1d08fc3", 00:27:15.923 "total_data_clusters": 237847, 00:27:15.923 "free_clusters": 237847, 00:27:15.923 "block_size": 512, 00:27:15.923 "cluster_size": 4194304 00:27:15.923 } 00:27:15.923 ]' 00:27:15.923 03:08:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="596d677c-6664-42a2-ba7a-caacb9c95f83") .free_clusters' 00:27:15.923 03:08:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=237847 00:27:15.923 03:08:06 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="596d677c-6664-42a2-ba7a-caacb9c95f83") .cluster_size' 00:27:15.923 03:08:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=4194304 00:27:15.923 03:08:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=951388 00:27:15.923 03:08:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 951388 00:27:15.923 951388 00:27:15.923 03:08:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # rpc_cmd bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:27:15.923 03:08:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.923 03:08:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.180 e2afde38-69c9-4b35-8077-b0596b3eaba8 00:27:16.180 03:08:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.180 03:08:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:27:16.180 03:08:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.180 03:08:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.180 03:08:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.180 03:08:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:27:16.180 03:08:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.180 03:08:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.180 03:08:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.180 03:08:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:27:16.180 03:08:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.180 03:08:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.437 03:08:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.437 03:08:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:16.437 03:08:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:16.437 03:08:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:27:16.437 03:08:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:16.437 03:08:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:27:16.437 03:08:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:16.437 03:08:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:27:16.437 03:08:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:27:16.437 03:08:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in 
"${sanitizers[@]}" 00:27:16.437 03:08:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:16.437 03:08:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:27:16.437 03:08:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:27:16.437 03:08:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:27:16.437 03:08:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:27:16.437 03:08:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:27:16.437 03:08:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:16.437 03:08:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:27:16.437 03:08:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:27:16.437 03:08:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:27:16.437 03:08:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:27:16.437 03:08:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:16.437 03:08:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:16.437 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:16.437 fio-3.35 00:27:16.437 Starting 1 thread 00:27:16.437 EAL: No free 2048 kB hugepages reported on node 1 00:27:18.957 00:27:18.957 test: (groupid=0, jobs=1): err= 0: pid=441225: Mon May 13 03:08:09 2024 00:27:18.957 read: IOPS=6132, BW=24.0MiB/s (25.1MB/s)(48.1MiB/2008msec) 00:27:18.957 slat (nsec): min=1895, max=181161, avg=2547.09, stdev=2421.60 00:27:18.957 clat (usec): min=5752, max=19268, avg=11555.15, stdev=955.57 00:27:18.957 lat (usec): min=5763, max=19270, avg=11557.70, stdev=955.46 00:27:18.957 clat percentiles (usec): 00:27:18.957 | 1.00th=[ 9372], 5.00th=[10028], 10.00th=[10421], 20.00th=[10814], 00:27:18.957 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11600], 60.00th=[11731], 00:27:18.957 | 70.00th=[11994], 80.00th=[12387], 90.00th=[12780], 95.00th=[13042], 00:27:18.957 | 99.00th=[13698], 99.50th=[14091], 99.90th=[16319], 99.95th=[17695], 00:27:18.957 | 99.99th=[19268] 00:27:18.957 bw ( KiB/s): min=23096, max=25096, per=99.88%, avg=24502.00, stdev=947.66, samples=4 00:27:18.957 iops : min= 5774, max= 6274, avg=6125.50, stdev=236.92, samples=4 00:27:18.957 write: IOPS=6116, BW=23.9MiB/s (25.1MB/s)(48.0MiB/2008msec); 0 zone resets 00:27:18.957 slat (usec): min=2, max=131, avg= 2.77, stdev= 1.86 00:27:18.957 clat (usec): min=4362, max=16566, avg=9177.94, stdev=860.55 00:27:18.957 lat (usec): min=4370, max=16569, avg=9180.70, stdev=860.51 00:27:18.957 clat percentiles (usec): 00:27:18.957 | 1.00th=[ 7242], 5.00th=[ 7832], 10.00th=[ 8160], 20.00th=[ 8586], 00:27:18.957 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[ 9372], 00:27:18.957 | 70.00th=[ 9634], 80.00th=[ 9896], 90.00th=[10159], 95.00th=[10421], 00:27:18.957 | 99.00th=[11076], 99.50th=[11469], 99.90th=[15533], 99.95th=[16319], 00:27:18.957 | 
99.99th=[16581] 00:27:18.957 bw ( KiB/s): min=24088, max=24640, per=99.89%, avg=24438.00, stdev=262.69, samples=4 00:27:18.957 iops : min= 6022, max= 6160, avg=6109.50, stdev=65.67, samples=4 00:27:18.957 lat (msec) : 10=44.70%, 20=55.30% 00:27:18.957 cpu : usr=50.12%, sys=41.75%, ctx=79, majf=0, minf=37 00:27:18.957 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:27:18.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.958 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:18.958 issued rwts: total=12315,12282,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.958 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:18.958 00:27:18.958 Run status group 0 (all jobs): 00:27:18.958 READ: bw=24.0MiB/s (25.1MB/s), 24.0MiB/s-24.0MiB/s (25.1MB/s-25.1MB/s), io=48.1MiB (50.4MB), run=2008-2008msec 00:27:18.958 WRITE: bw=23.9MiB/s (25.1MB/s), 23.9MiB/s-23.9MiB/s (25.1MB/s-25.1MB/s), io=48.0MiB (50.3MB), run=2008-2008msec 00:27:18.958 03:08:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:18.958 03:08:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.958 03:08:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.958 03:08:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.958 03:08:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # sync 00:27:18.958 03:08:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # rpc_cmd bdev_lvol_delete lvs_n_0/lbd_nest_0 00:27:18.958 03:08:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.958 03:08:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.138 03:08:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.138 03:08:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@75 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_n_0 00:27:23.138 03:08:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.138 03:08:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.138 03:08:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.138 03:08:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:27:23.138 03:08:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.138 03:08:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.037 03:08:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.037 03:08:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_0 00:27:25.037 03:08:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.037 03:08:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.037 03:08:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.037 03:08:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:27:25.037 03:08:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.037 03:08:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.936 03:08:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.936 03:08:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@81 -- # trap - SIGINT 
SIGTERM EXIT 00:27:26.936 03:08:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:27:26.936 03:08:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@84 -- # nvmftestfini 00:27:26.936 03:08:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:26.936 03:08:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:27:26.936 03:08:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:26.936 03:08:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:27:26.936 03:08:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:26.936 03:08:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:26.936 rmmod nvme_tcp 00:27:26.936 rmmod nvme_fabrics 00:27:26.936 rmmod nvme_keyring 00:27:26.936 03:08:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:26.936 03:08:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:27:26.936 03:08:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:27:26.936 03:08:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 438989 ']' 00:27:26.936 03:08:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 438989 00:27:26.936 03:08:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 438989 ']' 00:27:26.936 03:08:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 438989 00:27:26.936 03:08:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:27:26.936 03:08:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:26.936 03:08:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 438989 00:27:26.936 03:08:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:26.936 03:08:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:26.936 03:08:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 438989' 00:27:26.936 killing process with pid 438989 00:27:26.936 03:08:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 438989 00:27:26.936 [2024-05-13 03:08:17.572745] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:26.936 03:08:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 438989 00:27:27.195 03:08:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:27.195 03:08:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:27.195 03:08:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:27.195 03:08:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:27.195 03:08:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:27.195 03:08:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:27.195 03:08:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:27.195 03:08:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:29.096 03:08:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:29.096 00:27:29.096 real 0m30.622s 00:27:29.096 user 1m50.222s 00:27:29.096 sys 0m6.152s 
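A side note on the lvstore numbers reported earlier in this test: the 951388 passed to bdev_lvol_create for lbd_nest_0 is simply the free space of the nested store lvs_n_0 expressed in MiB. A minimal sketch of that derivation, using the uuid and jq filters shown in the trace (here "rpc.py" is shorthand for the full scripts/rpc.py path driven by the test's rpc_cmd wrapper; the inline values are the ones printed above):

  uuid=596d677c-6664-42a2-ba7a-caacb9c95f83
  fc=$(rpc.py bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$uuid\") .free_clusters")   # 237847
  cs=$(rpc.py bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$uuid\") .cluster_size")    # 4194304 bytes = 4 MiB
  echo $(( fc * cs / 1024 / 1024 ))                                                          # 237847 * 4 = 951388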
00:27:29.096 03:08:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:29.096 03:08:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.096 ************************************ 00:27:29.096 END TEST nvmf_fio_host 00:27:29.096 ************************************ 00:27:29.355 03:08:19 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:29.355 03:08:19 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:29.355 03:08:19 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:29.355 03:08:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:29.355 ************************************ 00:27:29.355 START TEST nvmf_failover 00:27:29.355 ************************************ 00:27:29.355 03:08:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:29.355 * Looking for test storage... 00:27:29.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:29.355 03:08:19 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:29.355 03:08:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:27:29.355 03:08:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:29.355 03:08:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:29.355 03:08:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:29.355 03:08:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:29.355 03:08:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:29.355 03:08:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:29.355 03:08:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:29.355 03:08:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:29.355 03:08:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:29.355 03:08:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:29.355 03:08:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:29.355 03:08:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:29.355 03:08:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:29.355 03:08:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:29.355 03:08:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:29.355 03:08:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:29.355 03:08:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:29.355 03:08:19 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:29.355 03:08:19 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:29.355 03:08:19 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:27:29.355 03:08:19 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.355 03:08:19 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.355 03:08:19 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.355 03:08:19 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:27:29.355 03:08:19 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.355 03:08:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:27:29.355 03:08:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:29.355 03:08:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:29.355 03:08:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:29.355 03:08:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:29.355 03:08:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:29.355 03:08:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:29.355 03:08:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:29.355 03:08:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:29.355 03:08:19 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:29.355 03:08:19 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:29.355 03:08:19 
nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:29.355 03:08:19 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:29.355 03:08:19 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:27:29.355 03:08:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:29.355 03:08:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:29.355 03:08:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:29.355 03:08:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:29.355 03:08:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:29.355 03:08:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:29.355 03:08:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:29.355 03:08:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:29.355 03:08:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:29.355 03:08:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:29.355 03:08:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:27:29.355 03:08:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:31.254 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:31.254 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:27:31.254 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:31.254 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:31.254 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:31.254 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:31.254 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:31.254 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:27:31.254 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:31.254 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:27:31.254 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:27:31.254 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:27:31.254 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:27:31.254 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:27:31.254 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:27:31.254 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:31.254 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:31.254 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:31.254 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:31.254 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:31.254 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:31.254 03:08:22 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:31.254 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:31.254 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:31.254 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:31.254 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:31.254 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:31.254 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:31.254 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:31.254 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:31.254 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:31.254 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:31.254 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:31.254 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:31.254 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:31.254 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:31.254 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:31.254 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:31.254 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:31.254 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:31.254 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:31.254 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:31.254 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:31.254 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:31.254 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:31.254 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:31.254 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:31.254 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:31.254 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:31.255 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:31.255 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:31.255 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:31.255 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:31.255 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:31.255 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:31.255 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:31.255 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:31.255 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:31.255 
03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:31.255 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:31.255 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:31.255 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:31.255 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:31.255 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:31.255 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:31.255 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:31.255 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:31.255 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:31.255 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:31.255 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:31.255 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:31.255 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:31.255 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:27:31.255 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:31.255 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:31.255 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:31.255 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:31.255 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:31.255 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:31.255 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:31.255 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:31.255 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:31.255 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:31.255 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:31.255 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:31.255 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:31.255 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:31.255 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:31.514 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:31.514 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:31.514 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:31.514 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:31.514 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:31.514 03:08:22 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:31.514 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:31.514 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:31.514 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:31.514 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:27:31.514 00:27:31.514 --- 10.0.0.2 ping statistics --- 00:27:31.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:31.514 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:27:31.514 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:31.514 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:31.514 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:27:31.514 00:27:31.514 --- 10.0.0.1 ping statistics --- 00:27:31.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:31.514 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:27:31.514 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:31.514 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:27:31.514 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:31.514 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:31.514 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:31.514 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:31.514 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:31.514 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:31.514 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:31.514 03:08:22 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:27:31.514 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:31.514 03:08:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:31.514 03:08:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:31.514 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=444372 00:27:31.514 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:31.514 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 444372 00:27:31.514 03:08:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 444372 ']' 00:27:31.514 03:08:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:31.514 03:08:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:31.514 03:08:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:31.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
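Condensed, the nvmftestinit plumbing traced above gives the failover test a point-to-point NVMe/TCP path between the two e810 ports: the target-side port is moved into a private network namespace (where nvmf_tgt is then launched) while the initiator-side port stays in the root namespace. A rough recap under that reading, with interface names and addresses exactly as printed in the log:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                                # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                      # initiator side (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0        # target side (namespace)
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT             # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # sanity-check both directions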
00:27:31.514 03:08:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:31.514 03:08:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:31.514 [2024-05-13 03:08:22.266791] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:27:31.514 [2024-05-13 03:08:22.266866] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:31.514 EAL: No free 2048 kB hugepages reported on node 1 00:27:31.514 [2024-05-13 03:08:22.306458] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:31.774 [2024-05-13 03:08:22.334141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:31.774 [2024-05-13 03:08:22.419445] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:31.774 [2024-05-13 03:08:22.419497] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:31.774 [2024-05-13 03:08:22.419510] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:31.774 [2024-05-13 03:08:22.419522] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:31.774 [2024-05-13 03:08:22.419531] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:31.774 [2024-05-13 03:08:22.419584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:31.774 [2024-05-13 03:08:22.419641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:31.774 [2024-05-13 03:08:22.419644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:31.774 03:08:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:31.774 03:08:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:27:31.774 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:31.774 03:08:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:31.774 03:08:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:31.774 03:08:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:31.774 03:08:22 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:32.064 [2024-05-13 03:08:22.781791] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:32.064 03:08:22 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:32.323 Malloc0 00:27:32.323 03:08:23 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:32.580 03:08:23 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:32.838 03:08:23 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:33.096 [2024-05-13 03:08:23.794640] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:33.096 [2024-05-13 03:08:23.794933] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:33.096 03:08:23 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:33.354 [2024-05-13 03:08:24.035554] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:33.354 03:08:24 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:33.611 [2024-05-13 03:08:24.280466] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:27:33.611 03:08:24 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=444562 00:27:33.611 03:08:24 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:27:33.611 03:08:24 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:33.611 03:08:24 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 444562 /var/tmp/bdevperf.sock 00:27:33.611 03:08:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 444562 ']' 00:27:33.611 03:08:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:33.611 03:08:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:33.611 03:08:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:33.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
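Stripped of the trace noise, the failover exercise that follows drives bdevperf over its own RPC socket: the same NVMe0 controller is attached through two target listeners, a verify workload is started, and the primary listener is then torn down so I/O has to move to the surviving path. A condensed sketch of that sequence (rpc.py and bdevperf.py abbreviate the full script paths shown in the log):

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1    # primary path
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1    # alternate path
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &                   # 15 s verify workload (-w verify -t 15)
  sleep 1
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420                                          # drop the primary, forcing failover
  # ...the rest of the run cycles the listeners through 4422 and 4421 the same way before re-adding 4420.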
00:27:33.611 03:08:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:33.611 03:08:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:33.869 03:08:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:33.869 03:08:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:27:33.869 03:08:24 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:34.434 NVMe0n1 00:27:34.434 03:08:25 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:34.691 00:27:34.691 03:08:25 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=444679 00:27:34.691 03:08:25 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:34.691 03:08:25 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:27:35.622 03:08:26 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:35.880 [2024-05-13 03:08:26.596260] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1253ba0 is same with the state(5) to be set 00:27:35.880 [2024-05-13 03:08:26.596322] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1253ba0 is same with the state(5) to be set 00:27:35.880 [2024-05-13 03:08:26.596346] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1253ba0 is same with the state(5) to be set 00:27:35.880 [2024-05-13 03:08:26.596358] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1253ba0 is same with the state(5) to be set 00:27:35.880 [2024-05-13 03:08:26.596386] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1253ba0 is same with the state(5) to be set 00:27:35.880 [2024-05-13 03:08:26.596398] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1253ba0 is same with the state(5) to be set 00:27:35.880 [2024-05-13 03:08:26.596410] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1253ba0 is same with the state(5) to be set 00:27:35.880 03:08:26 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:27:39.160 03:08:29 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:39.419 00:27:39.419 03:08:29 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:39.419 [2024-05-13 03:08:30.213612] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1255150 is same with the state(5) to be set 00:27:39.419 [2024-05-13 03:08:30.213679] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1255150 is same with the state(5) to be set 00:27:39.420 [2024-05-13 03:08:30.213717] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1255150 is same with the state(5) to be set 00:27:39.420 [2024-05-13 03:08:30.213731] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1255150 is same with the state(5) to be set 00:27:39.420 [2024-05-13 03:08:30.213743] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1255150 is same with the state(5) to be set 00:27:39.420 [2024-05-13 03:08:30.213755] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1255150 is same with the state(5) to be set 00:27:39.420 [2024-05-13 03:08:30.213768] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1255150 is same with the state(5) to be set 00:27:39.420 [2024-05-13 03:08:30.213780] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1255150 is same with the state(5) to be set 00:27:39.420 [2024-05-13 03:08:30.213792] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1255150 is same with the state(5) to be set 00:27:39.420 [2024-05-13 03:08:30.213805] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1255150 is same with the state(5) to be set 00:27:39.420 [2024-05-13 03:08:30.213817] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1255150 is same with the state(5) to be set 00:27:39.420 [2024-05-13 03:08:30.213830] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1255150 is same with the state(5) to be set 00:27:39.420 [2024-05-13 03:08:30.213842] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1255150 is same with the state(5) to be set 00:27:39.420 [2024-05-13 03:08:30.213856] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1255150 is same with the state(5) to be set 00:27:39.420 [2024-05-13 03:08:30.213869] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1255150 is same with the state(5) to be set 00:27:39.678 03:08:30 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:27:42.959 03:08:33 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:42.959 [2024-05-13 03:08:33.517813] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:42.959 03:08:33 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:27:43.892 03:08:34 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:44.150 [2024-05-13 03:08:34.769502] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1255830 is same with the state(5) to be set 00:27:44.150 [2024-05-13 03:08:34.769572] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1255830 is same with the state(5) to be set 00:27:44.150 [2024-05-13 03:08:34.769597] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1255830 is same with the state(5) to be set 00:27:44.150 [2024-05-13 03:08:34.769609] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1255830 is same 
with the state(5) to be set 00:27:44.150 [2024-05-13 03:08:34.769621] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1255830 is same with the state(5) to be set 00:27:44.150
[... the identical tcp.c:1595:nvmf_tcp_qpair_set_recv_state error for tqpair=0x1255830 repeats continuously here, with only the microsecond timestamp advancing, through 2024-05-13 03:08:34.770370 ...]
00:27:44.151 03:08:34 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 444679 00:27:50.717 0 00:27:50.717 03:08:40 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 444562 00:27:50.717 03:08:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 444562 ']' 00:27:50.717 03:08:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 444562 00:27:50.717 03:08:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:27:50.717 03:08:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:50.717 03:08:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 444562 00:27:50.717 03:08:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:50.717 03:08:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo
']' 00:27:50.717 03:08:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 444562' 00:27:50.717 killing process with pid 444562 00:27:50.717 03:08:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 444562 00:27:50.717 03:08:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 444562 00:27:50.717 03:08:40 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:50.717 [2024-05-13 03:08:24.343942] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:27:50.717 [2024-05-13 03:08:24.344077] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid444562 ] 00:27:50.717 EAL: No free 2048 kB hugepages reported on node 1 00:27:50.717 [2024-05-13 03:08:24.379679] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:50.717 [2024-05-13 03:08:24.409259] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.717 [2024-05-13 03:08:24.500845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:50.717 Running I/O for 15 seconds... 00:27:50.717 [2024-05-13 03:08:26.596804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:78800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.717 [2024-05-13 03:08:26.596848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-05-13 03:08:26.596877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:78808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.717 [2024-05-13 03:08:26.596893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-05-13 03:08:26.596910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:78816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.717 [2024-05-13 03:08:26.596925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-05-13 03:08:26.596941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:78824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.717 [2024-05-13 03:08:26.596955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-05-13 03:08:26.596972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:78832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.717 [2024-05-13 03:08:26.596986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-05-13 03:08:26.597013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:78840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.717 [2024-05-13 03:08:26.597028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-05-13 
03:08:26.597044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:78848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.717 [2024-05-13 03:08:26.597058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-05-13 03:08:26.597074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:78856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.717 [2024-05-13 03:08:26.597089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-05-13 03:08:26.597106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.717 [2024-05-13 03:08:26.597121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-05-13 03:08:26.597152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:78872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.717 [2024-05-13 03:08:26.597167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-05-13 03:08:26.597184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:78880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.717 [2024-05-13 03:08:26.597223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-05-13 03:08:26.597239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:78888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.717 [2024-05-13 03:08:26.597252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-05-13 03:08:26.597267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.717 [2024-05-13 03:08:26.597280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-05-13 03:08:26.597295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:78904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.717 [2024-05-13 03:08:26.597308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-05-13 03:08:26.597324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:78912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.717 [2024-05-13 03:08:26.597337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-05-13 03:08:26.597352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:78920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.717 [2024-05-13 03:08:26.597366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-05-13 03:08:26.597380] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:78928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.717 [2024-05-13 03:08:26.597394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-05-13 03:08:26.597409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:78936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.717 [2024-05-13 03:08:26.597423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-05-13 03:08:26.597438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:78944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.717 [2024-05-13 03:08:26.597451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-05-13 03:08:26.597466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:78952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.717 [2024-05-13 03:08:26.597480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-05-13 03:08:26.597495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:78960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.717 [2024-05-13 03:08:26.597509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-05-13 03:08:26.597524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:78968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.717 [2024-05-13 03:08:26.597537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-05-13 03:08:26.597552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:78976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.717 [2024-05-13 03:08:26.597565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-05-13 03:08:26.597584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.717 [2024-05-13 03:08:26.597598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-05-13 03:08:26.597623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:78200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.717 [2024-05-13 03:08:26.597636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-05-13 03:08:26.597650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:78208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.717 [2024-05-13 03:08:26.597664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-05-13 03:08:26.597712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:78216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.717 [2024-05-13 03:08:26.597728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-05-13 03:08:26.597744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:78224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.717 [2024-05-13 03:08:26.597757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-05-13 03:08:26.597772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:78232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-05-13 03:08:26.597786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-05-13 03:08:26.597801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:78240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-05-13 03:08:26.597815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-05-13 03:08:26.597830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:78248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-05-13 03:08:26.597843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-05-13 03:08:26.597859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:78256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-05-13 03:08:26.597872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-05-13 03:08:26.597887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:78264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-05-13 03:08:26.597901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-05-13 03:08:26.597916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-05-13 03:08:26.597930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-05-13 03:08:26.597945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:78280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-05-13 03:08:26.597959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-05-13 03:08:26.597974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:78288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-05-13 03:08:26.597991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-05-13 03:08:26.598030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:78296 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-05-13 03:08:26.598044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-05-13 03:08:26.598060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:78304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-05-13 03:08:26.598073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-05-13 03:08:26.598087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:78312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-05-13 03:08:26.598101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-05-13 03:08:26.598115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:78320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-05-13 03:08:26.598128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-05-13 03:08:26.598143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:78328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-05-13 03:08:26.598156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-05-13 03:08:26.598171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:78336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-05-13 03:08:26.598184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-05-13 03:08:26.598199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:78344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-05-13 03:08:26.598212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-05-13 03:08:26.598227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:78352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-05-13 03:08:26.598240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-05-13 03:08:26.598254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:78360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-05-13 03:08:26.598267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-05-13 03:08:26.598282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-05-13 03:08:26.598295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-05-13 03:08:26.598309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:78376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:50.718 [2024-05-13 03:08:26.598323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-05-13 03:08:26.598338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-05-13 03:08:26.598351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-05-13 03:08:26.598369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:78392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-05-13 03:08:26.598383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-05-13 03:08:26.598398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:78400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-05-13 03:08:26.598420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-05-13 03:08:26.598434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:78408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-05-13 03:08:26.598447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-05-13 03:08:26.598462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:78416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-05-13 03:08:26.598485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-05-13 03:08:26.598500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:78424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-05-13 03:08:26.598514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-05-13 03:08:26.598529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:78432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-05-13 03:08:26.598542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-05-13 03:08:26.598557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:78440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-05-13 03:08:26.598570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-05-13 03:08:26.598585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:78448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-05-13 03:08:26.598598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-05-13 03:08:26.598613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:78456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-05-13 03:08:26.598626] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-05-13 03:08:26.598642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:78464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-05-13 03:08:26.598655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-05-13 03:08:26.598670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:78472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-05-13 03:08:26.598719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-05-13 03:08:26.598736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:78480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-05-13 03:08:26.598751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-05-13 03:08:26.598767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:78488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-05-13 03:08:26.598780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-05-13 03:08:26.598800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:78496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-05-13 03:08:26.598815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-05-13 03:08:26.598831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:78504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-05-13 03:08:26.598845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-05-13 03:08:26.598861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-05-13 03:08:26.598875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-05-13 03:08:26.598891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:78520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-05-13 03:08:26.598905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-05-13 03:08:26.598920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-05-13 03:08:26.598935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-05-13 03:08:26.598951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:78536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-05-13 03:08:26.598965] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-05-13 03:08:26.598980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:78544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-05-13 03:08:26.599016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-05-13 03:08:26.599034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:78552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-05-13 03:08:26.599048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-05-13 03:08:26.599067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.719 [2024-05-13 03:08:26.599080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-05-13 03:08:26.599096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:78568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.719 [2024-05-13 03:08:26.599109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-05-13 03:08:26.599124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:78576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.719 [2024-05-13 03:08:26.599138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-05-13 03:08:26.599153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:78584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.719 [2024-05-13 03:08:26.599167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-05-13 03:08:26.599182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:78592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.719 [2024-05-13 03:08:26.599209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-05-13 03:08:26.599224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:78600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.719 [2024-05-13 03:08:26.599238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-05-13 03:08:26.599253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:78984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.719 [2024-05-13 03:08:26.599267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-05-13 03:08:26.599283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.719 [2024-05-13 03:08:26.599296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-05-13 03:08:26.599311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:79000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.719 [2024-05-13 03:08:26.599325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-05-13 03:08:26.599340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:79008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.719 [2024-05-13 03:08:26.599353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-05-13 03:08:26.599369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:79016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.719 [2024-05-13 03:08:26.599382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-05-13 03:08:26.599397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:79024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.719 [2024-05-13 03:08:26.599411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-05-13 03:08:26.599426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:79032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.719 [2024-05-13 03:08:26.599441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-05-13 03:08:26.599456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:79040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.719 [2024-05-13 03:08:26.599470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-05-13 03:08:26.599485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:79048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.719 [2024-05-13 03:08:26.599499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-05-13 03:08:26.599515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:79056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.719 [2024-05-13 03:08:26.599529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-05-13 03:08:26.599544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:79064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.719 [2024-05-13 03:08:26.599558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-05-13 03:08:26.599576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.719 [2024-05-13 03:08:26.599591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-05-13 03:08:26.599606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:79080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.719 [2024-05-13 03:08:26.599620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-05-13 03:08:26.599635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:79088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.719 [2024-05-13 03:08:26.599649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-05-13 03:08:26.599664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:79096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.719 [2024-05-13 03:08:26.599678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-05-13 03:08:26.599725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:79104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.719 [2024-05-13 03:08:26.599741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-05-13 03:08:26.599757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:79112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.719 [2024-05-13 03:08:26.599771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-05-13 03:08:26.599787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:78608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.719 [2024-05-13 03:08:26.599801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-05-13 03:08:26.599817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:78616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.719 [2024-05-13 03:08:26.599831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-05-13 03:08:26.599847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:78624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.719 [2024-05-13 03:08:26.599860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-05-13 03:08:26.599883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.719 [2024-05-13 03:08:26.599898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-05-13 03:08:26.599914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:78640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.719 [2024-05-13 03:08:26.599929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-05-13 
03:08:26.599945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:78648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.719 [2024-05-13 03:08:26.599959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-05-13 03:08:26.599975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:78656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.719 [2024-05-13 03:08:26.599992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-05-13 03:08:26.600033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:78664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.719 [2024-05-13 03:08:26.600047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-05-13 03:08:26.600063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:79120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.719 [2024-05-13 03:08:26.600077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-05-13 03:08:26.600093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:79128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.719 [2024-05-13 03:08:26.600107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-05-13 03:08:26.600122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:79136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.719 [2024-05-13 03:08:26.600137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-05-13 03:08:26.600152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:79144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.719 [2024-05-13 03:08:26.600165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-05-13 03:08:26.600181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:79152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.719 [2024-05-13 03:08:26.600195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-05-13 03:08:26.600210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:79160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.719 [2024-05-13 03:08:26.600223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-05-13 03:08:26.600238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:79168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.719 [2024-05-13 03:08:26.600252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-05-13 03:08:26.600267] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:79176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.719 [2024-05-13 03:08:26.600281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-05-13 03:08:26.600296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.719 [2024-05-13 03:08:26.600310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-05-13 03:08:26.600326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.720 [2024-05-13 03:08:26.600340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.720 [2024-05-13 03:08:26.600355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.720 [2024-05-13 03:08:26.600368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.720 [2024-05-13 03:08:26.600390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.720 [2024-05-13 03:08:26.600408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.720 [2024-05-13 03:08:26.600424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:78672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.720 [2024-05-13 03:08:26.600438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.720 [2024-05-13 03:08:26.600453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:78680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.720 [2024-05-13 03:08:26.600467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.720 [2024-05-13 03:08:26.600482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:78688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.720 [2024-05-13 03:08:26.600496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.720 [2024-05-13 03:08:26.600512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:78696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.720 [2024-05-13 03:08:26.600525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.720 [2024-05-13 03:08:26.600541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.720 [2024-05-13 03:08:26.600555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.720 [2024-05-13 03:08:26.600572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:103 nsid:1 lba:78712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.720 [2024-05-13 03:08:26.600586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.720 [2024-05-13 03:08:26.600601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:78720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.720 [2024-05-13 03:08:26.600615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.720 [2024-05-13 03:08:26.600630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:78728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.720 [2024-05-13 03:08:26.600644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.720 [2024-05-13 03:08:26.600659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.720 [2024-05-13 03:08:26.600672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.720 [2024-05-13 03:08:26.600687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:78744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.720 [2024-05-13 03:08:26.600722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.720 [2024-05-13 03:08:26.600740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:78752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.720 [2024-05-13 03:08:26.600754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.720 [2024-05-13 03:08:26.600770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:78760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.720 [2024-05-13 03:08:26.600785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.720 [2024-05-13 03:08:26.600804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:78768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.720 [2024-05-13 03:08:26.600819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.720 [2024-05-13 03:08:26.600835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:78776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.720 [2024-05-13 03:08:26.600849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.720 [2024-05-13 03:08:26.600866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:78784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.720 [2024-05-13 03:08:26.600880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.720 [2024-05-13 03:08:26.600896] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14e2e80 is same with 
the state(5) to be set 00:27:50.720 [2024-05-13 03:08:26.600915] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.720 [2024-05-13 03:08:26.600928] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.720 [2024-05-13 03:08:26.600940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78792 len:8 PRP1 0x0 PRP2 0x0 00:27:50.720 [2024-05-13 03:08:26.600954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.720 [2024-05-13 03:08:26.601039] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14e2e80 was disconnected and freed. reset controller. 00:27:50.720 [2024-05-13 03:08:26.601065] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:27:50.720 [2024-05-13 03:08:26.601113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:50.720 [2024-05-13 03:08:26.601132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.720 [2024-05-13 03:08:26.601148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:50.720 [2024-05-13 03:08:26.601162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.720 [2024-05-13 03:08:26.601176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:50.720 [2024-05-13 03:08:26.601189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.720 [2024-05-13 03:08:26.601203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:50.720 [2024-05-13 03:08:26.601217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.720 [2024-05-13 03:08:26.601230] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:50.720 [2024-05-13 03:08:26.601284] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c4120 (9): Bad file descriptor 00:27:50.720 [2024-05-13 03:08:26.604718] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:50.720 [2024-05-13 03:08:26.814222] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
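The bdevperf log above (the try.txt file the test prints at host/failover.sh@63) captures the host-side failover of this run: the TCP qpair to the first listener (10.0.0.2:4420) drops mid-I/O, every command still queued on it completes as "ABORTED - SQ DELETION", bdev_nvme frees qpair 0x14e2e80 and starts failover to 10.0.0.2:4421, the controller briefly enters the failed state, and the reset completes successfully roughly 200 ms later (03:08:26.604718 to 03:08:26.814222). The sketch below is only a rough mental model of that disconnect, abort, failover, reset sequence; all names in it (failover_ctrlr, failover_path, on_qpair_disconnected, abort_queued_io) are hypothetical and it is not SPDK's bdev_nvme implementation.

```c
/*
 * Rough, hypothetical sketch of the disconnect -> abort -> failover -> reset
 * sequence visible in the log above. This is NOT SPDK's bdev_nvme code; all
 * names here are invented for illustration.
 */
#include <stdbool.h>
#include <stdio.h>

struct failover_path {
	const char *addr;
	const char *svcid;
};

struct failover_ctrlr {
	struct failover_path paths[2]; /* e.g. 10.0.0.2:4420 and 10.0.0.2:4421 */
	int active;                    /* index of the path currently carrying I/O */
	bool failed;
};

/* Everything still queued on the dropped qpair completes with an abort
 * status, which is what the "ABORTED - SQ DELETION" completions show. */
static void abort_queued_io(const struct failover_ctrlr *c)
{
	printf("aborting queued i/o on %s:%s\n",
	       c->paths[c->active].addr, c->paths[c->active].svcid);
}

/* Invoked when the transport reports that the qpair is gone
 * ("qpair ... was disconnected and freed. reset controller."). */
static void on_qpair_disconnected(struct failover_ctrlr *c)
{
	abort_queued_io(c);
	c->failed = true;              /* "... in failed state." */

	int next = (c->active + 1) % 2;
	printf("Start failover from %s:%s to %s:%s\n",
	       c->paths[c->active].addr, c->paths[c->active].svcid,
	       c->paths[next].addr, c->paths[next].svcid);
	c->active = next;

	/* The real driver reconnects asynchronously; here the reset is
	 * collapsed into a single step for illustration. */
	c->failed = false;
	printf("Resetting controller successful.\n");
}

int main(void)
{
	struct failover_ctrlr c = {
		.paths = { { "10.0.0.2", "4420" }, { "10.0.0.2", "4421" } },
		.active = 0,
		.failed = false,
	};
	on_qpair_disconnected(&c);
	return 0;
}
```

In the actual run the reset is asynchronous rather than a single call, and the log shows a second round of the same abort pattern beginning at 03:08:30 while the 15-second bdevperf run is still in flight.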
00:27:50.720 [2024-05-13 03:08:30.215762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.720 [2024-05-13 03:08:30.215807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.720 [2024-05-13 03:08:30.215845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.720 [2024-05-13 03:08:30.215863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.720 [2024-05-13 03:08:30.215881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:4576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.720 [2024-05-13 03:08:30.215895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.720 [2024-05-13 03:08:30.215912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.720 [2024-05-13 03:08:30.215942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.720 [2024-05-13 03:08:30.215968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.720 [2024-05-13 03:08:30.215983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.720 [2024-05-13 03:08:30.215998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.720 [2024-05-13 03:08:30.216026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.720 [2024-05-13 03:08:30.216041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.720 [2024-05-13 03:08:30.216055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.720 [2024-05-13 03:08:30.216069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.720 [2024-05-13 03:08:30.216083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.720 [2024-05-13 03:08:30.216097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.720 [2024-05-13 03:08:30.216111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.720 [2024-05-13 03:08:30.216126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.720 [2024-05-13 03:08:30.216139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.720 [2024-05-13 03:08:30.216154] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.720 [2024-05-13 03:08:30.216167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.720 [2024-05-13 03:08:30.216182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:4648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.720 [2024-05-13 03:08:30.216196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.720 [2024-05-13 03:08:30.216211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.720 [2024-05-13 03:08:30.216225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.720 [2024-05-13 03:08:30.216240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:4664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.720 [2024-05-13 03:08:30.216257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.720 [2024-05-13 03:08:30.216272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.720 [2024-05-13 03:08:30.216287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.721 [2024-05-13 03:08:30.216303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:4680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.721 [2024-05-13 03:08:30.216317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.721 [2024-05-13 03:08:30.216331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.721 [2024-05-13 03:08:30.216345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.721 [2024-05-13 03:08:30.216360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.721 [2024-05-13 03:08:30.216374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.721 [2024-05-13 03:08:30.216388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.721 [2024-05-13 03:08:30.216402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.721 [2024-05-13 03:08:30.216417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.721 [2024-05-13 03:08:30.216431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.721 [2024-05-13 03:08:30.216446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 
nsid:1 lba:4720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.721 [2024-05-13 03:08:30.216461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.721 [2024-05-13 03:08:30.216476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.721 [2024-05-13 03:08:30.216489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.721 [2024-05-13 03:08:30.216505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.721 [2024-05-13 03:08:30.216519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.721 [2024-05-13 03:08:30.216534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.721 [2024-05-13 03:08:30.216548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.721 [2024-05-13 03:08:30.216563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.721 [2024-05-13 03:08:30.216577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.721 [2024-05-13 03:08:30.216592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.721 [2024-05-13 03:08:30.216606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.721 [2024-05-13 03:08:30.216621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.721 [2024-05-13 03:08:30.216638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.721 [2024-05-13 03:08:30.216654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.721 [2024-05-13 03:08:30.216668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.721 [2024-05-13 03:08:30.216705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.721 [2024-05-13 03:08:30.216723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.721 [2024-05-13 03:08:30.216739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.721 [2024-05-13 03:08:30.216753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.721 [2024-05-13 03:08:30.216769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:50.721 [2024-05-13 03:08:30.216784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.721 [2024-05-13 03:08:30.216799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.721 [2024-05-13 03:08:30.216813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.721 [2024-05-13 03:08:30.216829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.721 [2024-05-13 03:08:30.216843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.721 [2024-05-13 03:08:30.216859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.721 [2024-05-13 03:08:30.216873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.721 [2024-05-13 03:08:30.216888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.721 [2024-05-13 03:08:30.216902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.721 [2024-05-13 03:08:30.216918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.721 [2024-05-13 03:08:30.216932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.721 [2024-05-13 03:08:30.216948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.721 [2024-05-13 03:08:30.216962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.721 [2024-05-13 03:08:30.216977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:4856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.721 [2024-05-13 03:08:30.216991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.721 [2024-05-13 03:08:30.217007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.721 [2024-05-13 03:08:30.217022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.721 [2024-05-13 03:08:30.217041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.721 [2024-05-13 03:08:30.217056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.721 [2024-05-13 03:08:30.217071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.721 [2024-05-13 03:08:30.217085] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.721 [2024-05-13 03:08:30.217101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.721 [2024-05-13 03:08:30.217115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.721 [2024-05-13 03:08:30.217130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.721 [2024-05-13 03:08:30.217145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.721 [2024-05-13 03:08:30.217160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:4904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.721 [2024-05-13 03:08:30.217174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.721 [2024-05-13 03:08:30.217190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:4912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.721 [2024-05-13 03:08:30.217205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.721 [2024-05-13 03:08:30.217220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:4920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.721 [2024-05-13 03:08:30.217234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.721 [2024-05-13 03:08:30.217250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.721 [2024-05-13 03:08:30.217264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.721 [2024-05-13 03:08:30.217280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.721 [2024-05-13 03:08:30.217294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.721 [2024-05-13 03:08:30.217309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:4944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.722 [2024-05-13 03:08:30.217323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.722 [2024-05-13 03:08:30.217339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.722 [2024-05-13 03:08:30.217352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.722 [2024-05-13 03:08:30.217368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:4960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.722 [2024-05-13 03:08:30.217383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.722 [2024-05-13 03:08:30.217399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:4968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.722 [2024-05-13 03:08:30.217416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.722 [2024-05-13 03:08:30.217432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.722 [2024-05-13 03:08:30.217447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.722 [2024-05-13 03:08:30.217462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.722 [2024-05-13 03:08:30.217477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.722 [2024-05-13 03:08:30.217492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.722 [2024-05-13 03:08:30.217522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.722 [2024-05-13 03:08:30.217538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.722 [2024-05-13 03:08:30.217552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.722 [2024-05-13 03:08:30.217568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.722 [2024-05-13 03:08:30.217582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.722 [2024-05-13 03:08:30.217597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.722 [2024-05-13 03:08:30.217611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.722 [2024-05-13 03:08:30.217626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:4328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.722 [2024-05-13 03:08:30.217640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.722 [2024-05-13 03:08:30.217655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:4336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.722 [2024-05-13 03:08:30.217668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.722 [2024-05-13 03:08:30.217706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.722 [2024-05-13 03:08:30.217722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:50.722 [2024-05-13 03:08:30.217739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.722 [2024-05-13 03:08:30.217753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.722 [2024-05-13 03:08:30.217769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:4360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.722 [2024-05-13 03:08:30.217783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.722 [2024-05-13 03:08:30.217798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.722 [2024-05-13 03:08:30.217813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.722 [2024-05-13 03:08:30.217828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:5016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.722 [2024-05-13 03:08:30.217851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.722 [2024-05-13 03:08:30.217867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.722 [2024-05-13 03:08:30.217882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.722 [2024-05-13 03:08:30.217897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.722 [2024-05-13 03:08:30.217912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.722 [2024-05-13 03:08:30.217927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.722 [2024-05-13 03:08:30.217941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.722 [2024-05-13 03:08:30.217972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.722 [2024-05-13 03:08:30.217989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5048 len:8 PRP1 0x0 PRP2 0x0 00:27:50.722 [2024-05-13 03:08:30.218003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.722 [2024-05-13 03:08:30.218062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:50.722 [2024-05-13 03:08:30.218083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.722 [2024-05-13 03:08:30.218100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:50.722 [2024-05-13 03:08:30.218123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.722 [2024-05-13 03:08:30.218137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:50.722 [2024-05-13 03:08:30.218150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.722 [2024-05-13 03:08:30.218164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:50.722 [2024-05-13 03:08:30.218178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.722 [2024-05-13 03:08:30.218191] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c4120 is same with the state(5) to be set 00:27:50.722 [2024-05-13 03:08:30.218352] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.722 [2024-05-13 03:08:30.218371] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.722 [2024-05-13 03:08:30.218384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:8 PRP1 0x0 PRP2 0x0 00:27:50.722 [2024-05-13 03:08:30.218398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.722 [2024-05-13 03:08:30.218416] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.722 [2024-05-13 03:08:30.218429] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.722 [2024-05-13 03:08:30.218441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5064 len:8 PRP1 0x0 PRP2 0x0 00:27:50.722 [2024-05-13 03:08:30.218454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.722 [2024-05-13 03:08:30.218473] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.722 [2024-05-13 03:08:30.218485] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.722 [2024-05-13 03:08:30.218497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5072 len:8 PRP1 0x0 PRP2 0x0 00:27:50.722 [2024-05-13 03:08:30.218511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.722 [2024-05-13 03:08:30.218525] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.722 [2024-05-13 03:08:30.218536] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.722 [2024-05-13 03:08:30.218548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5080 len:8 PRP1 0x0 PRP2 0x0 00:27:50.722 [2024-05-13 03:08:30.218562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.722 [2024-05-13 03:08:30.218575] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.722 [2024-05-13 03:08:30.218587] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.722 [2024-05-13 03:08:30.218599] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:8 PRP1 0x0 PRP2 0x0 00:27:50.722 [2024-05-13 03:08:30.218612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.722 [2024-05-13 03:08:30.218626] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.722 [2024-05-13 03:08:30.218638] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.722 [2024-05-13 03:08:30.218649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5096 len:8 PRP1 0x0 PRP2 0x0 00:27:50.722 [2024-05-13 03:08:30.218663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.722 [2024-05-13 03:08:30.218676] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.722 [2024-05-13 03:08:30.218711] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.722 [2024-05-13 03:08:30.218726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5104 len:8 PRP1 0x0 PRP2 0x0 00:27:50.722 [2024-05-13 03:08:30.218739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.722 [2024-05-13 03:08:30.218767] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.722 [2024-05-13 03:08:30.218780] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.722 [2024-05-13 03:08:30.218792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5112 len:8 PRP1 0x0 PRP2 0x0 00:27:50.722 [2024-05-13 03:08:30.218806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.722 [2024-05-13 03:08:30.218820] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.722 [2024-05-13 03:08:30.218832] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.722 [2024-05-13 03:08:30.218844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:8 PRP1 0x0 PRP2 0x0 00:27:50.722 [2024-05-13 03:08:30.218857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.722 [2024-05-13 03:08:30.218872] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.723 [2024-05-13 03:08:30.218884] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.723 [2024-05-13 03:08:30.218896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5128 len:8 PRP1 0x0 PRP2 0x0 00:27:50.723 [2024-05-13 03:08:30.218914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.723 [2024-05-13 03:08:30.218928] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.723 [2024-05-13 03:08:30.218940] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.723 [2024-05-13 03:08:30.218952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:5136 len:8 PRP1 0x0 PRP2 0x0 00:27:50.723 [2024-05-13 03:08:30.218966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.723 [2024-05-13 03:08:30.218980] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.723 [2024-05-13 03:08:30.218991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.723 [2024-05-13 03:08:30.219003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5144 len:8 PRP1 0x0 PRP2 0x0 00:27:50.723 [2024-05-13 03:08:30.219016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.723 [2024-05-13 03:08:30.219030] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.723 [2024-05-13 03:08:30.219042] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.723 [2024-05-13 03:08:30.219053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:8 PRP1 0x0 PRP2 0x0 00:27:50.723 [2024-05-13 03:08:30.219067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.723 [2024-05-13 03:08:30.219081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.723 [2024-05-13 03:08:30.219092] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.723 [2024-05-13 03:08:30.219104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5160 len:8 PRP1 0x0 PRP2 0x0 00:27:50.723 [2024-05-13 03:08:30.219118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.723 [2024-05-13 03:08:30.219131] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.723 [2024-05-13 03:08:30.219143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.723 [2024-05-13 03:08:30.219154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5168 len:8 PRP1 0x0 PRP2 0x0 00:27:50.723 [2024-05-13 03:08:30.219182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.723 [2024-05-13 03:08:30.219196] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.723 [2024-05-13 03:08:30.219207] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.723 [2024-05-13 03:08:30.219218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5176 len:8 PRP1 0x0 PRP2 0x0 00:27:50.723 [2024-05-13 03:08:30.219231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.723 [2024-05-13 03:08:30.219245] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.723 [2024-05-13 03:08:30.219256] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.723 [2024-05-13 03:08:30.219267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:8 PRP1 0x0 PRP2 0x0 00:27:50.723 
[2024-05-13 03:08:30.219281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.723 [2024-05-13 03:08:30.219296] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.723 [2024-05-13 03:08:30.219307] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.723 [2024-05-13 03:08:30.219321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5192 len:8 PRP1 0x0 PRP2 0x0 00:27:50.723 [2024-05-13 03:08:30.219335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.723 [2024-05-13 03:08:30.219350] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.723 [2024-05-13 03:08:30.219362] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.723 [2024-05-13 03:08:30.219373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5200 len:8 PRP1 0x0 PRP2 0x0 00:27:50.723 [2024-05-13 03:08:30.219401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.723 [2024-05-13 03:08:30.219416] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.723 [2024-05-13 03:08:30.219427] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.723 [2024-05-13 03:08:30.219439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5208 len:8 PRP1 0x0 PRP2 0x0 00:27:50.723 [2024-05-13 03:08:30.219452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.723 [2024-05-13 03:08:30.219467] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.723 [2024-05-13 03:08:30.219478] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.723 [2024-05-13 03:08:30.219490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4368 len:8 PRP1 0x0 PRP2 0x0 00:27:50.723 [2024-05-13 03:08:30.219503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.723 [2024-05-13 03:08:30.219517] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.723 [2024-05-13 03:08:30.219530] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.723 [2024-05-13 03:08:30.219541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4376 len:8 PRP1 0x0 PRP2 0x0 00:27:50.723 [2024-05-13 03:08:30.219554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.723 [2024-05-13 03:08:30.219568] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.723 [2024-05-13 03:08:30.219580] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.723 [2024-05-13 03:08:30.219592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4384 len:8 PRP1 0x0 PRP2 0x0 00:27:50.723 [2024-05-13 03:08:30.219605] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.723 [2024-05-13 03:08:30.219619] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.723 [2024-05-13 03:08:30.219630] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.723 [2024-05-13 03:08:30.219642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4392 len:8 PRP1 0x0 PRP2 0x0 00:27:50.723 [2024-05-13 03:08:30.219655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.723 [2024-05-13 03:08:30.219669] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.723 [2024-05-13 03:08:30.219681] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.723 [2024-05-13 03:08:30.219692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4400 len:8 PRP1 0x0 PRP2 0x0 00:27:50.723 [2024-05-13 03:08:30.219713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.723 [2024-05-13 03:08:30.219731] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.723 [2024-05-13 03:08:30.219744] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.723 [2024-05-13 03:08:30.219756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4408 len:8 PRP1 0x0 PRP2 0x0 00:27:50.723 [2024-05-13 03:08:30.219769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.723 [2024-05-13 03:08:30.219783] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.723 [2024-05-13 03:08:30.219794] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.723 [2024-05-13 03:08:30.219806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4416 len:8 PRP1 0x0 PRP2 0x0 00:27:50.723 [2024-05-13 03:08:30.219819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.723 [2024-05-13 03:08:30.219833] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.723 [2024-05-13 03:08:30.219845] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.723 [2024-05-13 03:08:30.219857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4424 len:8 PRP1 0x0 PRP2 0x0 00:27:50.723 [2024-05-13 03:08:30.219870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.723 [2024-05-13 03:08:30.219884] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.723 [2024-05-13 03:08:30.219895] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.723 [2024-05-13 03:08:30.219906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:8 PRP1 0x0 PRP2 0x0 00:27:50.723 [2024-05-13 03:08:30.219920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.723 [2024-05-13 03:08:30.219934] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.723 [2024-05-13 03:08:30.219946] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.723 [2024-05-13 03:08:30.219957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5224 len:8 PRP1 0x0 PRP2 0x0 00:27:50.723 [2024-05-13 03:08:30.219971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.723 [2024-05-13 03:08:30.219985] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.723 [2024-05-13 03:08:30.219996] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.723 [2024-05-13 03:08:30.220008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5232 len:8 PRP1 0x0 PRP2 0x0 00:27:50.723 [2024-05-13 03:08:30.220021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.723 [2024-05-13 03:08:30.220035] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.723 [2024-05-13 03:08:30.220047] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.723 [2024-05-13 03:08:30.220058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5240 len:8 PRP1 0x0 PRP2 0x0 00:27:50.723 [2024-05-13 03:08:30.220072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.723 [2024-05-13 03:08:30.220085] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.723 [2024-05-13 03:08:30.220097] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.723 [2024-05-13 03:08:30.220109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:8 PRP1 0x0 PRP2 0x0 00:27:50.723 [2024-05-13 03:08:30.220122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.723 [2024-05-13 03:08:30.220141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.723 [2024-05-13 03:08:30.220153] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.724 [2024-05-13 03:08:30.220165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5256 len:8 PRP1 0x0 PRP2 0x0 00:27:50.724 [2024-05-13 03:08:30.220179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.724 [2024-05-13 03:08:30.220193] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.724 [2024-05-13 03:08:30.220205] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.724 [2024-05-13 03:08:30.220217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5264 len:8 PRP1 0x0 PRP2 0x0 00:27:50.724 [2024-05-13 03:08:30.220230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:50.724 [2024-05-13 03:08:30.220244] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.724 [2024-05-13 03:08:30.220256] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.724 [2024-05-13 03:08:30.220277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5272 len:8 PRP1 0x0 PRP2 0x0 00:27:50.724 [2024-05-13 03:08:30.220290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.724 [2024-05-13 03:08:30.220304] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.724 [2024-05-13 03:08:30.220316] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.724 [2024-05-13 03:08:30.220328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4432 len:8 PRP1 0x0 PRP2 0x0 00:27:50.724 [2024-05-13 03:08:30.220352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.724 [2024-05-13 03:08:30.220366] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.724 [2024-05-13 03:08:30.220377] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.724 [2024-05-13 03:08:30.220389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4440 len:8 PRP1 0x0 PRP2 0x0 00:27:50.724 [2024-05-13 03:08:30.220403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.724 [2024-05-13 03:08:30.220417] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.724 [2024-05-13 03:08:30.220428] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.724 [2024-05-13 03:08:30.220440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4448 len:8 PRP1 0x0 PRP2 0x0 00:27:50.724 [2024-05-13 03:08:30.220453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.724 [2024-05-13 03:08:30.220467] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.724 [2024-05-13 03:08:30.220478] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.724 [2024-05-13 03:08:30.220490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4456 len:8 PRP1 0x0 PRP2 0x0 00:27:50.724 [2024-05-13 03:08:30.220503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.724 [2024-05-13 03:08:30.220517] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.724 [2024-05-13 03:08:30.220528] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.724 [2024-05-13 03:08:30.220540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4464 len:8 PRP1 0x0 PRP2 0x0 00:27:50.724 [2024-05-13 03:08:30.220557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.724 [2024-05-13 03:08:30.220572] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.724 [2024-05-13 03:08:30.220584] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.724 [2024-05-13 03:08:30.220596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4472 len:8 PRP1 0x0 PRP2 0x0 00:27:50.724 [2024-05-13 03:08:30.220609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.724 [2024-05-13 03:08:30.220624] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.724 [2024-05-13 03:08:30.220657] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.724 [2024-05-13 03:08:30.220669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4480 len:8 PRP1 0x0 PRP2 0x0 00:27:50.724 [2024-05-13 03:08:30.220682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.724 [2024-05-13 03:08:30.220702] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.724 [2024-05-13 03:08:30.220732] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.724 [2024-05-13 03:08:30.220744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:8 PRP1 0x0 PRP2 0x0 00:27:50.724 [2024-05-13 03:08:30.220758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.724 [2024-05-13 03:08:30.220773] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.724 [2024-05-13 03:08:30.220785] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.724 [2024-05-13 03:08:30.220796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5288 len:8 PRP1 0x0 PRP2 0x0 00:27:50.724 [2024-05-13 03:08:30.220810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.724 [2024-05-13 03:08:30.220824] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.724 [2024-05-13 03:08:30.220835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.724 [2024-05-13 03:08:30.220847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5296 len:8 PRP1 0x0 PRP2 0x0 00:27:50.724 [2024-05-13 03:08:30.220860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.724 [2024-05-13 03:08:30.220875] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.724 [2024-05-13 03:08:30.220886] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.724 [2024-05-13 03:08:30.220898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5304 len:8 PRP1 0x0 PRP2 0x0 00:27:50.724 [2024-05-13 03:08:30.220912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.724 [2024-05-13 03:08:30.220926] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:27:50.724 [2024-05-13 03:08:30.220937] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.724 [2024-05-13 03:08:30.220949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:8 PRP1 0x0 PRP2 0x0 00:27:50.724 [2024-05-13 03:08:30.220962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.724 [2024-05-13 03:08:30.220976] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.724 [2024-05-13 03:08:30.220988] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.724 [2024-05-13 03:08:30.221019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5320 len:8 PRP1 0x0 PRP2 0x0 00:27:50.724 [2024-05-13 03:08:30.221034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.724 [2024-05-13 03:08:30.221048] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.724 [2024-05-13 03:08:30.221060] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.724 [2024-05-13 03:08:30.221072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4488 len:8 PRP1 0x0 PRP2 0x0 00:27:50.724 [2024-05-13 03:08:30.221085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.724 [2024-05-13 03:08:30.221099] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.724 [2024-05-13 03:08:30.221111] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.724 [2024-05-13 03:08:30.221123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4496 len:8 PRP1 0x0 PRP2 0x0 00:27:50.724 [2024-05-13 03:08:30.221152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.724 [2024-05-13 03:08:30.221166] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.724 [2024-05-13 03:08:30.221178] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.724 [2024-05-13 03:08:30.221189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4504 len:8 PRP1 0x0 PRP2 0x0 00:27:50.724 [2024-05-13 03:08:30.221203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.724 [2024-05-13 03:08:30.221217] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.724 [2024-05-13 03:08:30.221229] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.724 [2024-05-13 03:08:30.221241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4512 len:8 PRP1 0x0 PRP2 0x0 00:27:50.724 [2024-05-13 03:08:30.221254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.724 [2024-05-13 03:08:30.221268] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.724 [2024-05-13 03:08:30.221279] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.724 [2024-05-13 03:08:30.221291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4520 len:8 PRP1 0x0 PRP2 0x0 00:27:50.724 [2024-05-13 03:08:30.221304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.724 [2024-05-13 03:08:30.221319] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.724 [2024-05-13 03:08:30.221330] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.724 [2024-05-13 03:08:30.221342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4528 len:8 PRP1 0x0 PRP2 0x0 00:27:50.724 [2024-05-13 03:08:30.221355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.724 [2024-05-13 03:08:30.221369] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.724 [2024-05-13 03:08:30.221380] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.724 [2024-05-13 03:08:30.221392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4536 len:8 PRP1 0x0 PRP2 0x0 00:27:50.724 [2024-05-13 03:08:30.221405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.724 [2024-05-13 03:08:30.221423] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.724 [2024-05-13 03:08:30.221435] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.724 [2024-05-13 03:08:30.221447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4544 len:8 PRP1 0x0 PRP2 0x0 00:27:50.724 [2024-05-13 03:08:30.221461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.724 [2024-05-13 03:08:30.221475] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.724 [2024-05-13 03:08:30.221487] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.724 [2024-05-13 03:08:30.221499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5328 len:8 PRP1 0x0 PRP2 0x0 00:27:50.725 [2024-05-13 03:08:30.221512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-05-13 03:08:30.221526] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.725 [2024-05-13 03:08:30.221538] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.725 [2024-05-13 03:08:30.221549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4552 len:8 PRP1 0x0 PRP2 0x0 00:27:50.725 [2024-05-13 03:08:30.221563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-05-13 03:08:30.221577] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.725 [2024-05-13 03:08:30.221588] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed 
manually: 00:27:50.725 [2024-05-13 03:08:30.221600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4560 len:8 PRP1 0x0 PRP2 0x0 00:27:50.725 [2024-05-13 03:08:30.221613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-05-13 03:08:30.221627] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.725 [2024-05-13 03:08:30.221639] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.725 [2024-05-13 03:08:30.221651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4568 len:8 PRP1 0x0 PRP2 0x0 00:27:50.725 [2024-05-13 03:08:30.221664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-05-13 03:08:30.221678] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.725 [2024-05-13 03:08:30.221690] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.725 [2024-05-13 03:08:30.221709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:8 PRP1 0x0 PRP2 0x0 00:27:50.725 [2024-05-13 03:08:30.221723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-05-13 03:08:30.221737] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.725 [2024-05-13 03:08:30.221748] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.725 [2024-05-13 03:08:30.221760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4584 len:8 PRP1 0x0 PRP2 0x0 00:27:50.725 [2024-05-13 03:08:30.221774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-05-13 03:08:30.221788] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.725 [2024-05-13 03:08:30.221799] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.725 [2024-05-13 03:08:30.221811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4592 len:8 PRP1 0x0 PRP2 0x0 00:27:50.725 [2024-05-13 03:08:30.221828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-05-13 03:08:30.221843] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.725 [2024-05-13 03:08:30.221861] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.725 [2024-05-13 03:08:30.221873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4600 len:8 PRP1 0x0 PRP2 0x0 00:27:50.725 [2024-05-13 03:08:30.221887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-05-13 03:08:30.221901] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.725 [2024-05-13 03:08:30.221913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.725 [2024-05-13 03:08:30.221925] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:8 PRP1 0x0 PRP2 0x0 00:27:50.725 [2024-05-13 03:08:30.221939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-05-13 03:08:30.221953] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.725 [2024-05-13 03:08:30.221964] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.725 [2024-05-13 03:08:30.221976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4616 len:8 PRP1 0x0 PRP2 0x0 00:27:50.725 [2024-05-13 03:08:30.221990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-05-13 03:08:30.222004] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.725 [2024-05-13 03:08:30.222016] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.725 [2024-05-13 03:08:30.222028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4624 len:8 PRP1 0x0 PRP2 0x0 00:27:50.725 [2024-05-13 03:08:30.222041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-05-13 03:08:30.222055] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.725 [2024-05-13 03:08:30.222067] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.725 [2024-05-13 03:08:30.222079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4632 len:8 PRP1 0x0 PRP2 0x0 00:27:50.725 [2024-05-13 03:08:30.222092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-05-13 03:08:30.222106] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.725 [2024-05-13 03:08:30.222118] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.725 [2024-05-13 03:08:30.222135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:8 PRP1 0x0 PRP2 0x0 00:27:50.725 [2024-05-13 03:08:30.222148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-05-13 03:08:30.222162] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.725 [2024-05-13 03:08:30.222173] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.725 [2024-05-13 03:08:30.222185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4648 len:8 PRP1 0x0 PRP2 0x0 00:27:50.725 [2024-05-13 03:08:30.222198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-05-13 03:08:30.222212] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.725 [2024-05-13 03:08:30.222224] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.725 [2024-05-13 03:08:30.222238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:4656 len:8 PRP1 0x0 PRP2 0x0 00:27:50.725 [2024-05-13 03:08:30.222252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-05-13 03:08:30.222267] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.725 [2024-05-13 03:08:30.222284] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.725 [2024-05-13 03:08:30.222296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4664 len:8 PRP1 0x0 PRP2 0x0 00:27:50.725 [2024-05-13 03:08:30.222314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-05-13 03:08:30.222329] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.725 [2024-05-13 03:08:30.222341] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.725 [2024-05-13 03:08:30.222353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:8 PRP1 0x0 PRP2 0x0 00:27:50.725 [2024-05-13 03:08:30.222366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-05-13 03:08:30.222380] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.725 [2024-05-13 03:08:30.222392] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.725 [2024-05-13 03:08:30.222405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4680 len:8 PRP1 0x0 PRP2 0x0 00:27:50.725 [2024-05-13 03:08:30.222418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-05-13 03:08:30.222432] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.725 [2024-05-13 03:08:30.222447] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.725 [2024-05-13 03:08:30.222459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4688 len:8 PRP1 0x0 PRP2 0x0 00:27:50.725 [2024-05-13 03:08:30.222473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-05-13 03:08:30.222487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.725 [2024-05-13 03:08:30.222499] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.725 [2024-05-13 03:08:30.222510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4696 len:8 PRP1 0x0 PRP2 0x0 00:27:50.725 [2024-05-13 03:08:30.222524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-05-13 03:08:30.222538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.725 [2024-05-13 03:08:30.222549] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.725 [2024-05-13 03:08:30.222561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:8 PRP1 0x0 PRP2 0x0 
00:27:50.725 [2024-05-13 03:08:30.222574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-05-13 03:08:30.222588] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.726 [2024-05-13 03:08:30.222599] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.726 [2024-05-13 03:08:30.222611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4712 len:8 PRP1 0x0 PRP2 0x0 00:27:50.726 [2024-05-13 03:08:30.222624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-05-13 03:08:30.222638] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.726 [2024-05-13 03:08:30.222667] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.726 [2024-05-13 03:08:30.222680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4720 len:8 PRP1 0x0 PRP2 0x0 00:27:50.726 [2024-05-13 03:08:30.222693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-05-13 03:08:30.222729] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.726 [2024-05-13 03:08:30.222742] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.726 [2024-05-13 03:08:30.222754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4728 len:8 PRP1 0x0 PRP2 0x0 00:27:50.726 [2024-05-13 03:08:30.222772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-05-13 03:08:30.222787] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.726 [2024-05-13 03:08:30.222798] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.726 [2024-05-13 03:08:30.222810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:8 PRP1 0x0 PRP2 0x0 00:27:50.726 [2024-05-13 03:08:30.222823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-05-13 03:08:30.222837] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.726 [2024-05-13 03:08:30.222849] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.726 [2024-05-13 03:08:30.222861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4744 len:8 PRP1 0x0 PRP2 0x0 00:27:50.726 [2024-05-13 03:08:30.222874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-05-13 03:08:30.222888] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.726 [2024-05-13 03:08:30.222899] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.726 [2024-05-13 03:08:30.222911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4752 len:8 PRP1 0x0 PRP2 0x0 00:27:50.726 [2024-05-13 03:08:30.222925] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-05-13 03:08:30.222938] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.726 [2024-05-13 03:08:30.222949] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.726 [2024-05-13 03:08:30.222961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4760 len:8 PRP1 0x0 PRP2 0x0 00:27:50.726 [2024-05-13 03:08:30.222974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-05-13 03:08:30.222988] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.726 [2024-05-13 03:08:30.223000] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.726 [2024-05-13 03:08:30.223026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:8 PRP1 0x0 PRP2 0x0 00:27:50.726 [2024-05-13 03:08:30.223039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-05-13 03:08:30.223053] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.726 [2024-05-13 03:08:30.223064] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.726 [2024-05-13 03:08:30.223075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4776 len:8 PRP1 0x0 PRP2 0x0 00:27:50.726 [2024-05-13 03:08:30.223088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-05-13 03:08:30.223104] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.726 [2024-05-13 03:08:30.223116] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.726 [2024-05-13 03:08:30.223128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4784 len:8 PRP1 0x0 PRP2 0x0 00:27:50.726 [2024-05-13 03:08:30.223158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-05-13 03:08:30.223172] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.726 [2024-05-13 03:08:30.223184] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.726 [2024-05-13 03:08:30.223195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4792 len:8 PRP1 0x0 PRP2 0x0 00:27:50.726 [2024-05-13 03:08:30.223214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-05-13 03:08:30.223228] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.726 [2024-05-13 03:08:30.223240] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.726 [2024-05-13 03:08:30.223252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:8 PRP1 0x0 PRP2 0x0 00:27:50.726 [2024-05-13 03:08:30.223265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-05-13 03:08:30.223278] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.726 [2024-05-13 03:08:30.223290] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.726 [2024-05-13 03:08:30.223301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4808 len:8 PRP1 0x0 PRP2 0x0 00:27:50.726 [2024-05-13 03:08:30.223315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-05-13 03:08:30.223328] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.726 [2024-05-13 03:08:30.223340] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.726 [2024-05-13 03:08:30.223352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4816 len:8 PRP1 0x0 PRP2 0x0 00:27:50.726 [2024-05-13 03:08:30.223365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-05-13 03:08:30.223379] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.726 [2024-05-13 03:08:30.223391] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.726 [2024-05-13 03:08:30.223403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4824 len:8 PRP1 0x0 PRP2 0x0 00:27:50.726 [2024-05-13 03:08:30.223416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-05-13 03:08:30.223430] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.726 [2024-05-13 03:08:30.223442] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.726 [2024-05-13 03:08:30.223453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:8 PRP1 0x0 PRP2 0x0 00:27:50.726 [2024-05-13 03:08:30.223467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-05-13 03:08:30.223481] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.726 [2024-05-13 03:08:30.223492] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.726 [2024-05-13 03:08:30.223504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4840 len:8 PRP1 0x0 PRP2 0x0 00:27:50.726 [2024-05-13 03:08:30.223520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-05-13 03:08:30.223535] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.726 [2024-05-13 03:08:30.223547] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.726 [2024-05-13 03:08:30.223558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4848 len:8 PRP1 0x0 PRP2 0x0 00:27:50.726 [2024-05-13 03:08:30.223571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
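(Reader note on the repeating pairs above: for every queued I/O that gets flushed during the queue-pair teardown, SPDK prints the command itself via nvme_io_qpair_print_command and then the completion it manufactures for it via spdk_nvme_print_completion. The status pair "(00/08)" is status code type 0x0 (Generic Command Status) with status code 0x08, i.e. "Command Aborted due to SQ Deletion", which matches the "ABORTED - SQ DELETION" text in the log. A minimal stand-alone sketch of how such lines could be picked apart offline is shown below; the regular expressions and field names are assumptions based only on the format visible in this log, not an SPDK API.)

```python
# Hypothetical offline helper: decode SPDK nvme_qpair abort log lines like the
# ones above. The regexes and field names are assumptions based on the visible
# log format; this is not part of the test or of SPDK itself.
import re

CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (?P<op>READ|WRITE) "
    r"sqid:(?P<sqid>\d+) cid:(?P<cid>\d+) nsid:(?P<nsid>\d+) "
    r"lba:(?P<lba>\d+) len:(?P<len>\d+)"
)
CPL_RE = re.compile(
    r"spdk_nvme_print_completion: \*NOTICE\*: (?P<text>[A-Z -]+) "
    r"\((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\)"
)

# Status names for the codes that actually appear in this log
# (NVMe base spec, status code type 0x0 = Generic Command Status).
GENERIC_SC = {0x00: "SUCCESS", 0x08: "COMMAND ABORTED DUE TO SQ DELETION"}

def decode(line: str):
    """Return a small dict for a command or completion print line, else None."""
    m = CMD_RE.search(line)
    if m:
        return {"kind": "command", "op": m["op"], "lba": int(m["lba"]),
                "blocks": int(m["len"])}
    m = CPL_RE.search(line)
    if m:
        sct, sc = int(m["sct"], 16), int(m["sc"], 16)
        name = GENERIC_SC.get(sc, m["text"].strip()) if sct == 0 else m["text"].strip()
        return {"kind": "completion", "sct": sct, "sc": sc, "name": name}
    return None

if __name__ == "__main__":
    sample = ("nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: "
              "WRITE sqid:1 cid:0 nsid:1 lba:4608 len:8 PRP1 0x0 PRP2 0x0")
    print(decode(sample))  # {'kind': 'command', 'op': 'WRITE', 'lba': 4608, 'blocks': 8}
```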
00:27:50.726 [2024-05-13 03:08:30.223585] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.726 [2024-05-13 03:08:30.223597] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.726 [2024-05-13 03:08:30.223609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4856 len:8 PRP1 0x0 PRP2 0x0 00:27:50.726 [2024-05-13 03:08:30.223623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-05-13 03:08:30.223637] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.726 [2024-05-13 03:08:30.223664] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.726 [2024-05-13 03:08:30.223675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:8 PRP1 0x0 PRP2 0x0 00:27:50.726 [2024-05-13 03:08:30.223688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-05-13 03:08:30.223724] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.726 [2024-05-13 03:08:30.223742] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.726 [2024-05-13 03:08:30.223754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4872 len:8 PRP1 0x0 PRP2 0x0 00:27:50.726 [2024-05-13 03:08:30.223768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-05-13 03:08:30.223782] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.726 [2024-05-13 03:08:30.223794] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.726 [2024-05-13 03:08:30.223806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4880 len:8 PRP1 0x0 PRP2 0x0 00:27:50.726 [2024-05-13 03:08:30.223819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-05-13 03:08:30.223833] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.726 [2024-05-13 03:08:30.223844] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.726 [2024-05-13 03:08:30.223856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4888 len:8 PRP1 0x0 PRP2 0x0 00:27:50.726 [2024-05-13 03:08:30.223870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-05-13 03:08:30.223884] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.726 [2024-05-13 03:08:30.223895] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.726 [2024-05-13 03:08:30.223907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:8 PRP1 0x0 PRP2 0x0 00:27:50.726 [2024-05-13 03:08:30.223921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-05-13 03:08:30.223935] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.727 [2024-05-13 03:08:30.223947] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.727 [2024-05-13 03:08:30.223962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4904 len:8 PRP1 0x0 PRP2 0x0 00:27:50.727 [2024-05-13 03:08:30.223977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-05-13 03:08:30.223991] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.727 [2024-05-13 03:08:30.224003] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.727 [2024-05-13 03:08:30.224014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4912 len:8 PRP1 0x0 PRP2 0x0 00:27:50.727 [2024-05-13 03:08:30.224028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-05-13 03:08:30.224042] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.727 [2024-05-13 03:08:30.224053] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.727 [2024-05-13 03:08:30.224065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4920 len:8 PRP1 0x0 PRP2 0x0 00:27:50.727 [2024-05-13 03:08:30.224079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-05-13 03:08:30.224093] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.727 [2024-05-13 03:08:30.224104] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.727 [2024-05-13 03:08:30.224116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:8 PRP1 0x0 PRP2 0x0 00:27:50.727 [2024-05-13 03:08:30.224130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-05-13 03:08:30.224144] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.727 [2024-05-13 03:08:30.224160] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.727 [2024-05-13 03:08:30.224173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4936 len:8 PRP1 0x0 PRP2 0x0 00:27:50.727 [2024-05-13 03:08:30.224186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-05-13 03:08:30.224200] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.727 [2024-05-13 03:08:30.224212] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.727 [2024-05-13 03:08:30.224224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4944 len:8 PRP1 0x0 PRP2 0x0 00:27:50.727 [2024-05-13 03:08:30.224237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-05-13 03:08:30.224251] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:27:50.727 [2024-05-13 03:08:30.224263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.727 [2024-05-13 03:08:30.224274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4952 len:8 PRP1 0x0 PRP2 0x0 00:27:50.727 [2024-05-13 03:08:30.224288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-05-13 03:08:30.224302] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.727 [2024-05-13 03:08:30.224314] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.727 [2024-05-13 03:08:30.224336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:8 PRP1 0x0 PRP2 0x0 00:27:50.727 [2024-05-13 03:08:30.224350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-05-13 03:08:30.229745] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.727 [2024-05-13 03:08:30.229780] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.727 [2024-05-13 03:08:30.229796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4968 len:8 PRP1 0x0 PRP2 0x0 00:27:50.727 [2024-05-13 03:08:30.229811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-05-13 03:08:30.229826] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.727 [2024-05-13 03:08:30.229838] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.727 [2024-05-13 03:08:30.229851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4976 len:8 PRP1 0x0 PRP2 0x0 00:27:50.727 [2024-05-13 03:08:30.229864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-05-13 03:08:30.229879] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.727 [2024-05-13 03:08:30.229890] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.727 [2024-05-13 03:08:30.229903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4984 len:8 PRP1 0x0 PRP2 0x0 00:27:50.727 [2024-05-13 03:08:30.229916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-05-13 03:08:30.229931] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.727 [2024-05-13 03:08:30.229943] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.727 [2024-05-13 03:08:30.229954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:8 PRP1 0x0 PRP2 0x0 00:27:50.727 [2024-05-13 03:08:30.229968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-05-13 03:08:30.229983] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.727 [2024-05-13 03:08:30.229996] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.727 [2024-05-13 03:08:30.230016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5000 len:8 PRP1 0x0 PRP2 0x0 00:27:50.727 [2024-05-13 03:08:30.230044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-05-13 03:08:30.230058] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.727 [2024-05-13 03:08:30.230070] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.727 [2024-05-13 03:08:30.230081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4312 len:8 PRP1 0x0 PRP2 0x0 00:27:50.727 [2024-05-13 03:08:30.230094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-05-13 03:08:30.230108] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.727 [2024-05-13 03:08:30.230119] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.727 [2024-05-13 03:08:30.230131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4320 len:8 PRP1 0x0 PRP2 0x0 00:27:50.727 [2024-05-13 03:08:30.230144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-05-13 03:08:30.230158] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.727 [2024-05-13 03:08:30.230169] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.727 [2024-05-13 03:08:30.230180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4328 len:8 PRP1 0x0 PRP2 0x0 00:27:50.727 [2024-05-13 03:08:30.230194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-05-13 03:08:30.230211] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.727 [2024-05-13 03:08:30.230223] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.727 [2024-05-13 03:08:30.230235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4336 len:8 PRP1 0x0 PRP2 0x0 00:27:50.727 [2024-05-13 03:08:30.230248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-05-13 03:08:30.230262] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.727 [2024-05-13 03:08:30.230273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.727 [2024-05-13 03:08:30.230284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4344 len:8 PRP1 0x0 PRP2 0x0 00:27:50.727 [2024-05-13 03:08:30.230297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-05-13 03:08:30.230311] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.727 [2024-05-13 03:08:30.230323] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed 
manually: 00:27:50.727 [2024-05-13 03:08:30.230334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4352 len:8 PRP1 0x0 PRP2 0x0 00:27:50.727 [2024-05-13 03:08:30.230348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-05-13 03:08:30.230361] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.727 [2024-05-13 03:08:30.230373] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.727 [2024-05-13 03:08:30.230384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4360 len:8 PRP1 0x0 PRP2 0x0 00:27:50.727 [2024-05-13 03:08:30.230398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-05-13 03:08:30.230411] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.727 [2024-05-13 03:08:30.230422] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.727 [2024-05-13 03:08:30.230434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5008 len:8 PRP1 0x0 PRP2 0x0 00:27:50.727 [2024-05-13 03:08:30.230447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-05-13 03:08:30.230462] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.727 [2024-05-13 03:08:30.230473] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.727 [2024-05-13 03:08:30.230484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5016 len:8 PRP1 0x0 PRP2 0x0 00:27:50.727 [2024-05-13 03:08:30.230497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-05-13 03:08:30.230511] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.727 [2024-05-13 03:08:30.230522] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.727 [2024-05-13 03:08:30.230534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:8 PRP1 0x0 PRP2 0x0 00:27:50.727 [2024-05-13 03:08:30.230547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-05-13 03:08:30.230560] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.727 [2024-05-13 03:08:30.230571] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.727 [2024-05-13 03:08:30.230583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5032 len:8 PRP1 0x0 PRP2 0x0 00:27:50.727 [2024-05-13 03:08:30.230599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-05-13 03:08:30.230614] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.728 [2024-05-13 03:08:30.230625] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.728 [2024-05-13 03:08:30.230636] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5040 len:8 PRP1 0x0 PRP2 0x0 00:27:50.728 [2024-05-13 03:08:30.230649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.728 [2024-05-13 03:08:30.230664] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.728 [2024-05-13 03:08:30.230676] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.728 [2024-05-13 03:08:30.230687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5048 len:8 PRP1 0x0 PRP2 0x0 00:27:50.728 [2024-05-13 03:08:30.230727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.728 [2024-05-13 03:08:30.230792] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14f09f0 was disconnected and freed. reset controller. 00:27:50.728 [2024-05-13 03:08:30.230811] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:27:50.728 [2024-05-13 03:08:30.230826] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:50.728 [2024-05-13 03:08:30.230881] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c4120 (9): Bad file descriptor 00:27:50.728 [2024-05-13 03:08:30.234203] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:50.728 [2024-05-13 03:08:30.397061] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:50.728 [2024-05-13 03:08:34.770607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:125376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.728 [2024-05-13 03:08:34.770649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.728 [2024-05-13 03:08:34.770678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:125384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.728 [2024-05-13 03:08:34.770701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.728 [2024-05-13 03:08:34.770721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:125392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.728 [2024-05-13 03:08:34.770736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.728 [2024-05-13 03:08:34.770753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:125400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.728 [2024-05-13 03:08:34.770768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.728 [2024-05-13 03:08:34.770783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:125408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.728 [2024-05-13 03:08:34.770798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.728 [2024-05-13 03:08:34.770813] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:125416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.728 [2024-05-13 03:08:34.770828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.728 [2024-05-13 03:08:34.770859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:125424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.728 [2024-05-13 03:08:34.770880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.728 [2024-05-13 03:08:34.770896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:125432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.728 [2024-05-13 03:08:34.770910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.728 [2024-05-13 03:08:34.770926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:125440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.728 [2024-05-13 03:08:34.770939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.728 [2024-05-13 03:08:34.770954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:125448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.728 [2024-05-13 03:08:34.770968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.728 [2024-05-13 03:08:34.770985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:125456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.728 [2024-05-13 03:08:34.771013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.728 [2024-05-13 03:08:34.771029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:125464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.728 [2024-05-13 03:08:34.771043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.728 [2024-05-13 03:08:34.771058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:125472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.728 [2024-05-13 03:08:34.771071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.728 [2024-05-13 03:08:34.771086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:125480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.728 [2024-05-13 03:08:34.771099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.728 [2024-05-13 03:08:34.771114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:125488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.728 [2024-05-13 03:08:34.771128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.728 [2024-05-13 03:08:34.771143] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:125496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.728 [2024-05-13 03:08:34.771156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.728 [2024-05-13 03:08:34.771171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:125504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.728 [2024-05-13 03:08:34.771185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.728 [2024-05-13 03:08:34.771200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:125512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.728 [2024-05-13 03:08:34.771214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.728 [2024-05-13 03:08:34.771229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:125520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.728 [2024-05-13 03:08:34.771242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.728 [2024-05-13 03:08:34.771262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:125528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.728 [2024-05-13 03:08:34.771276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.728 [2024-05-13 03:08:34.771297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:125536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.728 [2024-05-13 03:08:34.771311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.728 [2024-05-13 03:08:34.771325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:125544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.728 [2024-05-13 03:08:34.771339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.728 [2024-05-13 03:08:34.771353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:125552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.728 [2024-05-13 03:08:34.771367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.728 [2024-05-13 03:08:34.771381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:125560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.728 [2024-05-13 03:08:34.771395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.728 [2024-05-13 03:08:34.771410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:126080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.728 [2024-05-13 03:08:34.771423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.728 [2024-05-13 03:08:34.771438] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:126088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.728 [2024-05-13 03:08:34.771452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.728 [2024-05-13 03:08:34.771466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:126096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.728 [2024-05-13 03:08:34.771480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.728 [2024-05-13 03:08:34.771494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:126104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.728 [2024-05-13 03:08:34.771508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.728 [2024-05-13 03:08:34.771523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:126112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.728 [2024-05-13 03:08:34.771536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.728 [2024-05-13 03:08:34.771551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:126120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.728 [2024-05-13 03:08:34.771564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.728 [2024-05-13 03:08:34.771580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:126128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.728 [2024-05-13 03:08:34.771593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.728 [2024-05-13 03:08:34.771608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:126136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.728 [2024-05-13 03:08:34.771624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.728 [2024-05-13 03:08:34.771640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:125568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.728 [2024-05-13 03:08:34.771654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.728 [2024-05-13 03:08:34.771669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:125576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.728 [2024-05-13 03:08:34.771704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.728 [2024-05-13 03:08:34.771723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:125584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.728 [2024-05-13 03:08:34.771737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.728 [2024-05-13 03:08:34.771769] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:36 nsid:1 lba:125592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.729 [2024-05-13 03:08:34.771784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.729 [2024-05-13 03:08:34.771802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:125600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.729 [2024-05-13 03:08:34.771818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.729 [2024-05-13 03:08:34.771834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:125608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.729 [2024-05-13 03:08:34.771848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.729 [2024-05-13 03:08:34.771863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:125616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.729 [2024-05-13 03:08:34.771878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.729 [2024-05-13 03:08:34.771893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:125624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.729 [2024-05-13 03:08:34.771908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.729 [2024-05-13 03:08:34.771923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:125632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.729 [2024-05-13 03:08:34.771937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.729 [2024-05-13 03:08:34.771953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:125640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.729 [2024-05-13 03:08:34.771967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.729 [2024-05-13 03:08:34.771983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:125648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.729 [2024-05-13 03:08:34.772012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.729 [2024-05-13 03:08:34.772028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:125656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.729 [2024-05-13 03:08:34.772042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.729 [2024-05-13 03:08:34.772076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:125664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.729 [2024-05-13 03:08:34.772090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.729 [2024-05-13 03:08:34.772106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 
nsid:1 lba:125672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.729 [2024-05-13 03:08:34.772119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.729 [2024-05-13 03:08:34.772133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:125680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.729 [2024-05-13 03:08:34.772147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.729 [2024-05-13 03:08:34.772162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:125688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.729 [2024-05-13 03:08:34.772175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.729 [2024-05-13 03:08:34.772190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:125696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.729 [2024-05-13 03:08:34.772204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.729 [2024-05-13 03:08:34.772219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:125704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.729 [2024-05-13 03:08:34.772232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.729 [2024-05-13 03:08:34.772247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:125712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.729 [2024-05-13 03:08:34.772261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.729 [2024-05-13 03:08:34.772275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:125720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.729 [2024-05-13 03:08:34.772289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.729 [2024-05-13 03:08:34.772303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:125728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.729 [2024-05-13 03:08:34.772317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.729 [2024-05-13 03:08:34.772331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:125736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.729 [2024-05-13 03:08:34.772344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.729 [2024-05-13 03:08:34.772359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:125744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.729 [2024-05-13 03:08:34.772372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.729 [2024-05-13 03:08:34.772387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:125752 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.729 [2024-05-13 03:08:34.772401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.729 [2024-05-13 03:08:34.772415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:126144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.729 [2024-05-13 03:08:34.772432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.729 [2024-05-13 03:08:34.772447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:126152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.729 [2024-05-13 03:08:34.772461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.729 [2024-05-13 03:08:34.772476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:126160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.729 [2024-05-13 03:08:34.772489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.729 [2024-05-13 03:08:34.772503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:126168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.729 [2024-05-13 03:08:34.772517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.729 [2024-05-13 03:08:34.772531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:126176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.729 [2024-05-13 03:08:34.772545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.729 [2024-05-13 03:08:34.772560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:126184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.729 [2024-05-13 03:08:34.772573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.729 [2024-05-13 03:08:34.772588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:126192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.729 [2024-05-13 03:08:34.772602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.729 [2024-05-13 03:08:34.772616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:126200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.729 [2024-05-13 03:08:34.772630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.729 [2024-05-13 03:08:34.772645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:126208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.729 [2024-05-13 03:08:34.772659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.729 [2024-05-13 03:08:34.772674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:126216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
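(Reader note: earlier in this log the bdev_nvme layer reported the queue pair disconnected and freed, a failover from 10.0.0.2:4421 to 10.0.0.2:4422, and "Resetting controller successful"; the READ/WRITE entries around this point, now printed with SGL descriptors for the TCP transport, are the I/O that was in flight when the next queue-pair teardown happened, flushed with the same SQ-deletion status. A sketch of how one might count these aborts per failover cycle is below; the marker strings are taken from this log, while the file handling and structure are assumptions for illustration.)

```python
# Hypothetical summarizer: count aborted I/Os between the failover markers
# seen in this log ("Start failover from A to B"). Feed the console log on
# stdin; everything here is an offline illustration, not an SPDK tool.
import re
import sys
from collections import Counter

FAILOVER_RE = re.compile(
    r"bdev_nvme_failover_trid: \*NOTICE\*: Start failover from (\S+) to (\S+)"
)
ABORT_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
    r"sqid:\d+ cid:\d+ nsid:\d+ lba:(\d+)"
)

def summarize(lines):
    """Count READ/WRITE command prints per failover cycle and track LBA span."""
    cycle, counts, lbas = 0, Counter(), []
    for line in lines:
        if FAILOVER_RE.search(line):
            cycle += 1
        for op, lba in ABORT_RE.findall(line):
            counts[(cycle, op)] += 1
            lbas.append(int(lba))
    lba_range = (min(lbas), max(lbas)) if lbas else None
    return counts, lba_range

if __name__ == "__main__":
    counts, lba_range = summarize(sys.stdin)
    for (cycle, op), n in sorted(counts.items()):
        print(f"failover cycle {cycle}: {n} {op} commands aborted")
    print("LBA range touched:", lba_range)
```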
00:27:50.729 [2024-05-13 03:08:34.772711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.729 [2024-05-13 03:08:34.772729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:126224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.729 [2024-05-13 03:08:34.772743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.729 [2024-05-13 03:08:34.772759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:126232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.729 [2024-05-13 03:08:34.772773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.729 [2024-05-13 03:08:34.772788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:126240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.729 [2024-05-13 03:08:34.772802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.729 [2024-05-13 03:08:34.772821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:126248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.729 [2024-05-13 03:08:34.772835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.729 [2024-05-13 03:08:34.772850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:126256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.729 [2024-05-13 03:08:34.772864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.729 [2024-05-13 03:08:34.772880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:126264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.729 [2024-05-13 03:08:34.772893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.729 [2024-05-13 03:08:34.772908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:126272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.729 [2024-05-13 03:08:34.772922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.729 [2024-05-13 03:08:34.772937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:126280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.729 [2024-05-13 03:08:34.772950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.729 [2024-05-13 03:08:34.772966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:126288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.729 [2024-05-13 03:08:34.772995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.729 [2024-05-13 03:08:34.773010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:126296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.729 [2024-05-13 03:08:34.773024] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.729 [2024-05-13 03:08:34.773038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:126304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.730 [2024-05-13 03:08:34.773052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.730 [2024-05-13 03:08:34.773066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:126312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.730 [2024-05-13 03:08:34.773080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.730 [2024-05-13 03:08:34.773094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:126320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.730 [2024-05-13 03:08:34.773108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.730 [2024-05-13 03:08:34.773123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:126328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.730 [2024-05-13 03:08:34.773136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.730 [2024-05-13 03:08:34.773151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:125760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.730 [2024-05-13 03:08:34.773164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.730 [2024-05-13 03:08:34.773179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:125768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.730 [2024-05-13 03:08:34.773200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.730 [2024-05-13 03:08:34.773216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:125776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.730 [2024-05-13 03:08:34.773244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.730 [2024-05-13 03:08:34.773260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:125784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.730 [2024-05-13 03:08:34.773274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.730 [2024-05-13 03:08:34.773290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:125792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.730 [2024-05-13 03:08:34.773304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.730 [2024-05-13 03:08:34.773319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:125800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.730 [2024-05-13 03:08:34.773332] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.730 [2024-05-13 03:08:34.773347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:125808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.730 [2024-05-13 03:08:34.773361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.730 [2024-05-13 03:08:34.773376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:125816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.730 [2024-05-13 03:08:34.773390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.730 [2024-05-13 03:08:34.773405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:126336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.730 [2024-05-13 03:08:34.773419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.730 [2024-05-13 03:08:34.773434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:126344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.730 [2024-05-13 03:08:34.773448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.730 [2024-05-13 03:08:34.773463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:126352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.730 [2024-05-13 03:08:34.773477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.730 [2024-05-13 03:08:34.773492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:126360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.730 [2024-05-13 03:08:34.773505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.730 [2024-05-13 03:08:34.773520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:126368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.730 [2024-05-13 03:08:34.773534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.730 [2024-05-13 03:08:34.773549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:126376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.730 [2024-05-13 03:08:34.773563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.730 [2024-05-13 03:08:34.773577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:126384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.730 [2024-05-13 03:08:34.773595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.730 [2024-05-13 03:08:34.773612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:126392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.730 [2024-05-13 03:08:34.773626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.730 [2024-05-13 03:08:34.773641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:125824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.730 [2024-05-13 03:08:34.773655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.730 [2024-05-13 03:08:34.773671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:125832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.730 [2024-05-13 03:08:34.773690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.730 [2024-05-13 03:08:34.773729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:125840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.730 [2024-05-13 03:08:34.773745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.730 [2024-05-13 03:08:34.773761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:125848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.730 [2024-05-13 03:08:34.773776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.730 [2024-05-13 03:08:34.773792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:125856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.730 [2024-05-13 03:08:34.773806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.730 [2024-05-13 03:08:34.773822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:125864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.730 [2024-05-13 03:08:34.773837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.730 [2024-05-13 03:08:34.773852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:125872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.730 [2024-05-13 03:08:34.773867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.730 [2024-05-13 03:08:34.773882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:125880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.730 [2024-05-13 03:08:34.773897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.730 [2024-05-13 03:08:34.773912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:125888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.730 [2024-05-13 03:08:34.773926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.730 [2024-05-13 03:08:34.773942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:125896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.730 [2024-05-13 03:08:34.773957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.730 [2024-05-13 03:08:34.773972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:125904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.730 [2024-05-13 03:08:34.773986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.730 [2024-05-13 03:08:34.774021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:125912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.730 [2024-05-13 03:08:34.774036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.730 [2024-05-13 03:08:34.774051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:125920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.730 [2024-05-13 03:08:34.774065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.730 [2024-05-13 03:08:34.774080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:125928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.730 [2024-05-13 03:08:34.774094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.730 [2024-05-13 03:08:34.774109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:125936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.730 [2024-05-13 03:08:34.774123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.730 [2024-05-13 03:08:34.774138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:125944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.731 [2024-05-13 03:08:34.774152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.731 [2024-05-13 03:08:34.774167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:125952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.731 [2024-05-13 03:08:34.774181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.731 [2024-05-13 03:08:34.774196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:125960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.731 [2024-05-13 03:08:34.774211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.731 [2024-05-13 03:08:34.774226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:125968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.731 [2024-05-13 03:08:34.774239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.731 [2024-05-13 03:08:34.774254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:125976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.731 [2024-05-13 03:08:34.774268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:27:50.731 [2024-05-13 03:08:34.774283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:125984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.731 [2024-05-13 03:08:34.774297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.731 [2024-05-13 03:08:34.774311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:125992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.731 [2024-05-13 03:08:34.774325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.731 [2024-05-13 03:08:34.774340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:126000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.731 [2024-05-13 03:08:34.774354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.731 [2024-05-13 03:08:34.774369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:126008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.731 [2024-05-13 03:08:34.774386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.731 [2024-05-13 03:08:34.774402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:126016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.731 [2024-05-13 03:08:34.774416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.731 [2024-05-13 03:08:34.774431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.731 [2024-05-13 03:08:34.774444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.731 [2024-05-13 03:08:34.774459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:126032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.731 [2024-05-13 03:08:34.774473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.731 [2024-05-13 03:08:34.774489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:126040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.731 [2024-05-13 03:08:34.774503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.731 [2024-05-13 03:08:34.774518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:126048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.731 [2024-05-13 03:08:34.774532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.731 [2024-05-13 03:08:34.774546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:126056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.731 [2024-05-13 03:08:34.774560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.731 
[2024-05-13 03:08:34.774575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:126064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.731 [2024-05-13 03:08:34.774589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.731 [2024-05-13 03:08:34.774603] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14e6ff0 is same with the state(5) to be set 00:27:50.731 [2024-05-13 03:08:34.774621] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.731 [2024-05-13 03:08:34.774633] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.731 [2024-05-13 03:08:34.774646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126072 len:8 PRP1 0x0 PRP2 0x0 00:27:50.731 [2024-05-13 03:08:34.774659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.731 [2024-05-13 03:08:34.774742] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14e6ff0 was disconnected and freed. reset controller. 00:27:50.731 [2024-05-13 03:08:34.774764] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:27:50.731 [2024-05-13 03:08:34.774798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:50.731 [2024-05-13 03:08:34.774817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.731 [2024-05-13 03:08:34.774832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:50.731 [2024-05-13 03:08:34.774846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.731 [2024-05-13 03:08:34.774864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:50.731 [2024-05-13 03:08:34.774879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.731 [2024-05-13 03:08:34.774893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:50.731 [2024-05-13 03:08:34.774906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.731 [2024-05-13 03:08:34.774920] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:50.731 [2024-05-13 03:08:34.774972] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c4120 (9): Bad file descriptor 00:27:50.731 [2024-05-13 03:08:34.778298] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:50.731 [2024-05-13 03:08:34.808410] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
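The path switch logged above (queued I/O completed as ABORTED - SQ DELETION, followed by "Start failover from 10.0.0.2:4422 to 10.0.0.2:4420" and a controller reset) is driven entirely through SPDK's JSON-RPC interface. A condensed sketch of that sequence, reusing the RPC script, bdevperf socket, subsystem NQN and ports that appear elsewhere in this run (illustrative only, not a verbatim excerpt of host/failover.sh):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  # target side: expose the subsystem on the two extra ports used for failover
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4421
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4422
  # host side (bdevperf): attach the same controller through all three paths
  for port in 4420 4421 4422; do
      $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n $nqn
  done
  # drop the path currently carrying I/O; the bdev layer fails over to the next one and resets the controller
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn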
00:27:50.731
00:27:50.731 Latency(us)
00:27:50.731 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:50.731 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:50.731 Verification LBA range: start 0x0 length 0x4000
00:27:50.731 NVMe0n1 : 15.01 8942.34 34.93 1031.90 0.00 12805.66 1080.13 20971.52
00:27:50.731 ===================================================================================================================
00:27:50.731 Total : 8942.34 34.93 1031.90 0.00 12805.66 1080.13 20971.52
00:27:50.731 Received shutdown signal, test time was about 15.000000 seconds
00:27:50.731
00:27:50.731 Latency(us)
00:27:50.731 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:50.731 ===================================================================================================================
00:27:50.731 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:50.731 03:08:40 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:27:50.731 03:08:40 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:27:50.731 03:08:40 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:27:50.731 03:08:40 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=446510
00:27:50.731 03:08:40 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:27:50.731 03:08:40 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 446510 /var/tmp/bdevperf.sock
00:27:50.731 03:08:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 446510 ']'
00:27:50.731 03:08:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:27:50.731 03:08:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100
00:27:50.731 03:08:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
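host/failover.sh@72-75 above launch a fresh bdevperf instance in RPC-server mode: -z makes it wait instead of starting a job immediately, and -r names the UNIX socket it serves, so controllers can be attached and the workload kicked off later over JSON-RPC (which is exactly what the bdevperf.py perform_tests call further down in this log does). A minimal sketch of that pattern, using the same binary, socket and job parameters seen in this run (not a verbatim excerpt of the test script):

  bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
  sock=/var/tmp/bdevperf.sock
  # start bdevperf idle, listening for RPCs on $sock
  $bdevperf -z -r $sock -q 128 -o 4096 -w verify -t 1 -f &
  # ... attach an NVMe-oF controller over $sock with scripts/rpc.py, then start the I/O job:
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests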
00:27:50.731 03:08:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:50.731 03:08:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:50.731 03:08:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:50.731 03:08:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:27:50.731 03:08:41 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:50.731 [2024-05-13 03:08:41.272636] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:50.731 03:08:41 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:50.989 [2024-05-13 03:08:41.513344] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:27:50.989 03:08:41 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:51.252 NVMe0n1 00:27:51.252 03:08:41 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:51.510 00:27:51.510 03:08:42 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:52.075 00:27:52.075 03:08:42 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:52.075 03:08:42 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:27:52.332 03:08:42 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:52.590 03:08:43 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:27:55.867 03:08:46 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:55.867 03:08:46 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:27:55.867 03:08:46 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=447180 00:27:55.867 03:08:46 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:55.867 03:08:46 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 447180 00:27:56.801 0 00:27:56.801 03:08:47 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:56.801 [2024-05-13 03:08:40.787451] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 
00:27:56.801 [2024-05-13 03:08:40.787556] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid446510 ] 00:27:56.801 EAL: No free 2048 kB hugepages reported on node 1 00:27:56.801 [2024-05-13 03:08:40.818747] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:56.801 [2024-05-13 03:08:40.846756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:56.801 [2024-05-13 03:08:40.930289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:56.801 [2024-05-13 03:08:43.207216] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:27:56.801 [2024-05-13 03:08:43.207311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:56.801 [2024-05-13 03:08:43.207334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.801 [2024-05-13 03:08:43.207352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:56.801 [2024-05-13 03:08:43.207365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.801 [2024-05-13 03:08:43.207379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:56.801 [2024-05-13 03:08:43.207392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.801 [2024-05-13 03:08:43.207406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:56.801 [2024-05-13 03:08:43.207420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.801 [2024-05-13 03:08:43.207434] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.801 [2024-05-13 03:08:43.207471] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.801 [2024-05-13 03:08:43.207502] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b3120 (9): Bad file descriptor 00:27:56.801 [2024-05-13 03:08:43.299946] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:56.801 Running I/O for 1 seconds... 
00:27:56.801 00:27:56.801 Latency(us) 00:27:56.801 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:56.801 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:56.801 Verification LBA range: start 0x0 length 0x4000 00:27:56.801 NVMe0n1 : 1.01 8980.28 35.08 0.00 0.00 14185.19 1929.67 18835.53 00:27:56.801 =================================================================================================================== 00:27:56.801 Total : 8980.28 35.08 0.00 0.00 14185.19 1929.67 18835.53 00:27:56.801 03:08:47 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:56.801 03:08:47 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:27:57.059 03:08:47 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:57.316 03:08:48 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:57.316 03:08:48 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:27:57.573 03:08:48 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:57.831 03:08:48 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:28:01.109 03:08:51 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:01.109 03:08:51 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:28:01.109 03:08:51 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 446510 00:28:01.109 03:08:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 446510 ']' 00:28:01.109 03:08:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 446510 00:28:01.109 03:08:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:28:01.109 03:08:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:01.109 03:08:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 446510 00:28:01.109 03:08:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:01.109 03:08:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:01.109 03:08:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 446510' 00:28:01.109 killing process with pid 446510 00:28:01.109 03:08:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 446510 00:28:01.109 03:08:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 446510 00:28:01.367 03:08:52 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:28:01.367 03:08:52 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:01.625 03:08:52 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:28:01.625 03:08:52 
nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:01.625 03:08:52 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:28:01.625 03:08:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:01.625 03:08:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:28:01.625 03:08:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:01.625 03:08:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:28:01.625 03:08:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:01.625 03:08:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:01.625 rmmod nvme_tcp 00:28:01.625 rmmod nvme_fabrics 00:28:01.625 rmmod nvme_keyring 00:28:01.625 03:08:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:01.625 03:08:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:28:01.625 03:08:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:28:01.625 03:08:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 444372 ']' 00:28:01.625 03:08:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 444372 00:28:01.625 03:08:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 444372 ']' 00:28:01.625 03:08:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 444372 00:28:01.625 03:08:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:28:01.625 03:08:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:01.625 03:08:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 444372 00:28:01.626 03:08:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:28:01.626 03:08:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:28:01.626 03:08:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 444372' 00:28:01.626 killing process with pid 444372 00:28:01.626 03:08:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 444372 00:28:01.626 [2024-05-13 03:08:52.370599] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:01.626 03:08:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 444372 00:28:01.884 03:08:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:01.884 03:08:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:01.884 03:08:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:01.884 03:08:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:01.884 03:08:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:01.884 03:08:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:01.884 03:08:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:01.884 03:08:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:04.418 03:08:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:04.418 00:28:04.418 real 0m34.734s 00:28:04.418 user 1m57.508s 
00:28:04.418 sys 0m7.033s 00:28:04.418 03:08:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:04.418 03:08:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:04.418 ************************************ 00:28:04.418 END TEST nvmf_failover 00:28:04.418 ************************************ 00:28:04.418 03:08:54 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:28:04.418 03:08:54 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:04.418 03:08:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:04.418 03:08:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:04.418 ************************************ 00:28:04.418 START TEST nvmf_host_discovery 00:28:04.418 ************************************ 00:28:04.418 03:08:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:28:04.418 * Looking for test storage... 00:28:04.418 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:04.418 03:08:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:04.418 03:08:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:28:04.418 03:08:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:04.418 03:08:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:04.418 03:08:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:04.418 03:08:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:04.418 03:08:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:04.418 03:08:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:04.418 03:08:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:04.418 03:08:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:04.418 03:08:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:04.418 03:08:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:04.418 03:08:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:04.418 03:08:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:04.418 03:08:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:04.418 03:08:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:04.418 03:08:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:04.418 03:08:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:04.418 03:08:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:04.418 03:08:54 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:04.418 03:08:54 nvmf_tcp.nvmf_host_discovery -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:04.418 03:08:54 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:04.418 03:08:54 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.418 03:08:54 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.418 03:08:54 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.418 03:08:54 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:28:04.418 03:08:54 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.418 03:08:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:28:04.418 03:08:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:04.418 03:08:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:04.418 03:08:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:04.418 03:08:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:04.418 03:08:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:04.418 03:08:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:04.418 03:08:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:04.418 03:08:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 
-- # have_pci_nics=0 00:28:04.418 03:08:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:28:04.418 03:08:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:28:04.418 03:08:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:28:04.418 03:08:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:28:04.418 03:08:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:28:04.418 03:08:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:28:04.418 03:08:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:28:04.418 03:08:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:04.418 03:08:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:04.418 03:08:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:04.418 03:08:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:04.419 03:08:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:04.419 03:08:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:04.419 03:08:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:04.419 03:08:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:04.419 03:08:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:04.419 03:08:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:04.419 03:08:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:28:04.419 03:08:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:06.320 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:06.320 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:28:06.320 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:06.320 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:06.320 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:06.320 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:06.320 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:06.320 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:28:06.320 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:06.320 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:28:06.320 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:28:06.320 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:28:06.320 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:28:06.320 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:28:06.320 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:28:06.320 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:06.321 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:06.321 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == 
e810 ]] 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:06.321 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:06.321 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:06.321 03:08:56 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:06.321 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:06.321 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:28:06.321 00:28:06.321 --- 10.0.0.2 ping statistics --- 00:28:06.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:06.321 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:06.321 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:06.321 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:28:06.321 00:28:06.321 --- 10.0.0.1 ping statistics --- 00:28:06.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:06.321 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=449785 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 449785 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 449785 ']' 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:06.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:06.321 03:08:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:06.321 [2024-05-13 03:08:57.024754] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:28:06.321 [2024-05-13 03:08:57.024848] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:06.321 EAL: No free 2048 kB hugepages reported on node 1 00:28:06.321 [2024-05-13 03:08:57.063631] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
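The nvmf_tcp_init / nvmfappstart trace above builds the two-sided TCP topology the discovery test runs on: the target-side port is moved into its own network namespace with 10.0.0.2, the initiator port keeps 10.0.0.1 in the root namespace, and the target application is then started inside that namespace. Restated as plain commands, with the same interface names, addresses and flags shown in this run, the plumbing amounts to:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator keeps 10.0.0.1 in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> root namespace
  # the discovery target then runs inside the namespace:
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2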
00:28:06.321 [2024-05-13 03:08:57.095529] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:06.580 [2024-05-13 03:08:57.189646] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:06.580 [2024-05-13 03:08:57.189707] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:06.580 [2024-05-13 03:08:57.189733] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:06.580 [2024-05-13 03:08:57.189747] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:06.580 [2024-05-13 03:08:57.189773] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:06.580 [2024-05-13 03:08:57.189812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:06.580 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:06.580 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:28:06.581 03:08:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:06.581 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:06.581 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:06.581 03:08:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:06.581 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:06.581 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.581 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:06.581 [2024-05-13 03:08:57.334047] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:06.581 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.581 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:28:06.581 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.581 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:06.581 [2024-05-13 03:08:57.341969] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:06.581 [2024-05-13 03:08:57.342265] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:06.581 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.581 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:28:06.581 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.581 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:06.581 null0 00:28:06.581 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.581 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:28:06.581 03:08:57 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.581 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:06.581 null1 00:28:06.581 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.581 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:28:06.581 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.581 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:06.581 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.581 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=449924 00:28:06.581 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:28:06.581 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 449924 /tmp/host.sock 00:28:06.581 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 449924 ']' 00:28:06.581 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:28:06.581 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:06.581 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:06.581 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:06.581 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:06.581 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:06.839 [2024-05-13 03:08:57.417817] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:28:06.839 [2024-05-13 03:08:57.417898] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid449924 ] 00:28:06.839 EAL: No free 2048 kB hugepages reported on node 1 00:28:06.839 [2024-05-13 03:08:57.455317] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
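With the link up, the trace starts two SPDK applications and configures the target over RPC: nvmf_tgt inside the namespace as the target (core mask 0x2, RPC on the default /var/tmp/spdk.sock) gets a TCP transport, the discovery subsystem listening on 10.0.0.2:8009, and two null bdevs, while a second nvmf_tgt in the default namespace plays the host role (core mask 0x1, RPC on /tmp/host.sock). The rpc_cmd calls in the trace wrap SPDK's scripts/rpc.py. A rough standalone sketch, assuming SPDK_DIR points at a built SPDK tree and using a simple socket-file wait instead of the test's waitforlisten helper:

#!/usr/bin/env bash
# Bring-up of the two SPDK apps traced above: the target inside the namespace and a
# second app acting as the host. SPDK_DIR and the wait loops are assumptions for a
# standalone run.
set -euo pipefail

SPDK_DIR=${SPDK_DIR:-/path/to/spdk}   # hypothetical location of a built SPDK tree
NS=cvl_0_0_ns_spdk
TGT_SOCK=/var/tmp/spdk.sock           # default RPC socket, used by the target app
HOST_SOCK=/tmp/host.sock              # RPC socket of the app playing the host role
rpc() { "$SPDK_DIR/scripts/rpc.py" -s "$TGT_SOCK" "$@"; }

# Target: core mask 0x2, all tracepoint groups enabled, running inside the namespace.
ip netns exec "$NS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
while [ ! -S "$TGT_SOCK" ]; do sleep 0.2; done   # crude stand-in for waitforlisten

# TCP transport with the same extra flags the test passes (-o -u 8192).
rpc nvmf_create_transport -t tcp -o -u 8192

# Expose the discovery subsystem on the target-side address, port 8009.
rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009

# Two null bdevs (1000 MB, 512-byte blocks) to be attached as namespaces later.
rpc bdev_null_create null0 1000 512
rpc bdev_null_create null1 1000 512
rpc bdev_wait_for_examine

# Host side: a second SPDK app on one core with its own RPC socket.
"$SPDK_DIR/build/bin/nvmf_tgt" -m 0x1 -r "$HOST_SOCK" &
while [ ! -S "$HOST_SOCK" ]; do sleep 0.2; done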
00:28:06.839 [2024-05-13 03:08:57.485216] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:06.839 [2024-05-13 03:08:57.575506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:07.098 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:07.098 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:28:07.098 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:07.098 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:28:07.098 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.098 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:07.098 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.098 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:28:07.098 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.098 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:07.098 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.098 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:28:07.098 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:28:07.098 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:07.098 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.098 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:07.098 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:07.098 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:07.098 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:07.098 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.098 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:28:07.098 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:28:07.098 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:07.098 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.098 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:07.098 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:07.098 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:07.098 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:07.099 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.099 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:28:07.099 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:28:07.099 03:08:57 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.099 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:07.099 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.099 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:28:07.099 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:07.099 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.099 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:07.099 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:07.099 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:07.099 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:07.099 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.099 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:28:07.099 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:28:07.099 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:07.099 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.099 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:07.099 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:07.099 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:07.099 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:07.099 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.099 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:28:07.099 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:28:07.099 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.099 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:07.099 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.099 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:28:07.099 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:07.099 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.099 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:07.099 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:07.099 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:07.099 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:07.099 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.357 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:28:07.357 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:28:07.357 03:08:57 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:07.357 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.357 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:07.357 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:07.357 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:07.357 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:07.357 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.357 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:28:07.358 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:07.358 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.358 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:07.358 [2024-05-13 03:08:57.967903] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:07.358 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.358 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:28:07.358 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:07.358 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.358 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:07.358 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:07.358 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:07.358 03:08:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:07.358 03:08:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.358 03:08:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:28:07.358 03:08:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:28:07.358 03:08:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:07.358 03:08:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.358 03:08:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:07.358 03:08:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:07.358 03:08:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:07.358 03:08:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:07.358 03:08:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.358 03:08:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:28:07.358 03:08:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:28:07.358 03:08:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:28:07.358 03:08:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:07.358 03:08:58 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:07.358 03:08:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:28:07.358 03:08:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:28:07.358 03:08:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:07.358 03:08:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:28:07.358 03:08:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:07.358 03:08:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:28:07.358 03:08:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.358 03:08:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:07.358 03:08:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.358 03:08:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:28:07.358 03:08:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:28:07.358 03:08:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:28:07.358 03:08:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:28:07.358 03:08:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:28:07.358 03:08:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.358 03:08:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:07.358 03:08:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.358 03:08:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:07.358 03:08:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:07.358 03:08:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:28:07.358 03:08:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:28:07.358 03:08:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:07.358 03:08:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:28:07.358 03:08:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:07.358 03:08:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:07.358 03:08:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.358 03:08:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:07.358 03:08:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:07.358 03:08:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:07.358 03:08:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.358 03:08:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 
'' == \n\v\m\e\0 ]] 00:28:07.358 03:08:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:28:08.291 [2024-05-13 03:08:58.754301] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:08.291 [2024-05-13 03:08:58.754339] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:08.291 [2024-05-13 03:08:58.754370] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:08.291 [2024-05-13 03:08:58.882791] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:08.291 [2024-05-13 03:08:58.982774] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:08.291 [2024-05-13 03:08:58.982800] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:08.550 
03:08:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:08.550 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:08.551 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:28:08.551 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:28:08.551 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:28:08.551 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:28:08.551 03:08:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:08.551 03:08:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:08.551 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.551 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:08.551 03:08:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:08.551 03:08:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:08.827 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.827 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:08.827 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:28:08.827 03:08:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:28:08.827 03:08:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:28:08.827 03:08:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:08.827 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:08.827 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:28:08.827 03:08:59 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@912 -- # (( max-- )) 00:28:08.827 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:08.827 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:28:08.827 03:08:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:28:08.827 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.828 03:08:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:28:08.828 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:08.828 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.099 03:08:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:28:09.099 03:08:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:28:09.099 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:28:09.099 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:28:09.099 03:08:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:28:09.099 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.099 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:09.099 [2024-05-13 03:08:59.632855] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:09.099 [2024-05-13 03:08:59.633578] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:28:09.099 [2024-05-13 03:08:59.633615] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:09.099 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.100 03:08:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:09.100 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:09.100 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:28:09.100 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:28:09.100 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:09.100 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:28:09.100 03:08:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:09.100 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.100 03:08:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:09.100 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:09.100 03:08:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:09.100 03:08:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:09.100 03:08:59 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.100 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.100 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:28:09.100 03:08:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:09.100 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:09.100 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:28:09.100 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:28:09.100 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:28:09.100 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:28:09.100 03:08:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:09.100 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.100 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:09.100 03:08:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:09.100 03:08:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:09.100 03:08:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:09.100 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.100 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:09.100 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:28:09.100 03:08:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:28:09.100 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:28:09.100 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:28:09.100 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:28:09.100 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:28:09.100 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:28:09.100 03:08:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:09.100 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.100 03:08:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:09.100 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:09.100 03:08:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:09.100 03:08:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:09.100 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.100 03:08:59 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:28:09.100 03:08:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:28:09.100 [2024-05-13 03:08:59.761443] bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:28:09.100 [2024-05-13 03:08:59.865223] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:09.100 [2024-05-13 03:08:59.865248] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:09.100 [2024-05-13 03:08:59.865259] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:10.034 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:28:10.034 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:28:10.034 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:28:10.034 03:09:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:10.034 03:09:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:10.034 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.034 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:10.034 03:09:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:10.034 03:09:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:10.034 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.034 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:28:10.034 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:28:10.034 03:09:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:28:10.034 03:09:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:28:10.034 03:09:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:10.034 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:10.034 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:28:10.034 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:28:10.034 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:10.034 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:28:10.034 03:09:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:10.034 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.034 03:09:00 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@74 -- # jq '. | length' 00:28:10.034 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:10.034 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.292 03:09:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:28:10.292 03:09:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:28:10.292 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:28:10.292 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:28:10.292 03:09:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:10.293 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.293 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:10.293 [2024-05-13 03:09:00.845418] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:28:10.293 [2024-05-13 03:09:00.845461] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:10.293 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.293 03:09:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:10.293 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:10.293 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:28:10.293 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:28:10.293 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:10.293 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:28:10.293 03:09:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:10.293 03:09:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:10.293 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.293 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:10.293 03:09:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:10.293 03:09:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:10.293 [2024-05-13 03:09:00.853568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:10.293 [2024-05-13 03:09:00.853607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.293 [2024-05-13 03:09:00.853626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:10.293 [2024-05-13 03:09:00.853641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.293 [2024-05-13 03:09:00.853657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:10.293 [2024-05-13 03:09:00.853671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.293 [2024-05-13 03:09:00.853686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:10.293 [2024-05-13 03:09:00.853710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.293 [2024-05-13 03:09:00.853742] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c83540 is same with the state(5) to be set 00:28:10.293 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.293 [2024-05-13 03:09:00.863570] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c83540 (9): Bad file descriptor 00:28:10.293 [2024-05-13 03:09:00.873622] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:10.293 [2024-05-13 03:09:00.874008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.293 [2024-05-13 03:09:00.874244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.293 [2024-05-13 03:09:00.874271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c83540 with addr=10.0.0.2, port=4420 00:28:10.293 [2024-05-13 03:09:00.874288] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c83540 is same with the state(5) to be set 00:28:10.293 [2024-05-13 03:09:00.874311] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c83540 (9): Bad file descriptor 00:28:10.293 [2024-05-13 03:09:00.874348] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:10.293 [2024-05-13 03:09:00.874367] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:10.293 [2024-05-13 03:09:00.874399] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:10.293 [2024-05-13 03:09:00.874420] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
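From this point on the test drives everything through the two RPC sockets, as the xtrace above shows: it enables bdev_nvme logging on the host app, starts the discovery service against 10.0.0.2:8009, creates cnode0 on the target with null0 as a namespace, a 4420 data listener and the host NQN on its allowed list, then polls small jq-based helpers until the controller (nvme0) and its bdev (nvme0n1) appear, checking notify_get_notifications counts along the way. The sketch below condenses that flow; wait_for is a simplified stand-in for the test's waitforcondition helper (up to 10 tries, one second apart), and SPDK_DIR is again an assumed checkout path:

#!/usr/bin/env bash
# Host-side discovery flow condensed from host/discovery.sh as traced above, plus the
# target-side RPCs that make nvme0/nvme0n1 appear. SPDK_DIR is an assumed path; the
# socket paths match the trace.
set -euo pipefail

SPDK_DIR=${SPDK_DIR:-/path/to/spdk}   # hypothetical location of a built SPDK tree
host_rpc() { "$SPDK_DIR/scripts/rpc.py" -s /tmp/host.sock "$@"; }
tgt_rpc()  { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }

# jq helpers used by the trace: flatten the JSON name fields into one sorted,
# space-separated string so they can be compared against literals.
get_subsystem_names()    { host_rpc bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs; }
get_bdev_list()          { host_rpc bdev_get_bdevs            | jq -r '.[].name' | sort | xargs; }
get_notification_count() { host_rpc notify_get_notifications -i "$1" | jq '. | length'; }

# Simplified stand-in for the test's waitforcondition: up to 10 tries, 1 s apart.
wait_for() {
    local cond=$1 max=10
    while (( max-- )); do
        if eval "$cond"; then return 0; fi
        sleep 1
    done
    return 1
}

host_rpc log_set_flag bdev_nvme

# Start the discovery service against the target's discovery listener on 10.0.0.2:8009;
# per the trace, controllers attached through it are named after the "nvme" base.
host_rpc bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test

# Target side: expose cnode0 with null0 as a namespace, a 4420 data listener, and the
# host NQN used above on its allowed-hosts list.
tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
tgt_rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test

# The discovery poller should now attach cnode0 as nvme0 with bdev nvme0n1, and one
# new notification should be visible past notify id 0.
wait_for '[[ "$(get_subsystem_names)" == "nvme0" ]]'
wait_for '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
wait_for '(( $(get_notification_count 0) == 1 ))'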
00:28:10.293 [2024-05-13 03:09:00.883715] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:10.293 [2024-05-13 03:09:00.884046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.293 [2024-05-13 03:09:00.884303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.293 [2024-05-13 03:09:00.884329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c83540 with addr=10.0.0.2, port=4420 00:28:10.293 [2024-05-13 03:09:00.884345] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c83540 is same with the state(5) to be set 00:28:10.293 [2024-05-13 03:09:00.884368] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c83540 (9): Bad file descriptor 00:28:10.293 [2024-05-13 03:09:00.884403] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:10.293 [2024-05-13 03:09:00.884425] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:10.293 [2024-05-13 03:09:00.884453] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:10.293 [2024-05-13 03:09:00.884472] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:10.293 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.293 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:28:10.293 03:09:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:10.293 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:10.293 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:28:10.293 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:28:10.293 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:28:10.293 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:28:10.293 03:09:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:10.293 03:09:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:10.293 [2024-05-13 03:09:00.893806] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:10.293 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.293 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:10.293 03:09:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:10.293 [2024-05-13 03:09:00.894112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.293 03:09:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:10.293 [2024-05-13 03:09:00.894374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.293 [2024-05-13 03:09:00.894403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c83540 with addr=10.0.0.2, port=4420 00:28:10.293 
[2024-05-13 03:09:00.894419] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c83540 is same with the state(5) to be set 00:28:10.293 [2024-05-13 03:09:00.894441] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c83540 (9): Bad file descriptor 00:28:10.293 [2024-05-13 03:09:00.894477] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:10.293 [2024-05-13 03:09:00.894510] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:10.293 [2024-05-13 03:09:00.894524] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:10.293 [2024-05-13 03:09:00.894544] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:10.293 [2024-05-13 03:09:00.903898] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:10.293 [2024-05-13 03:09:00.904116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.293 [2024-05-13 03:09:00.904369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.293 [2024-05-13 03:09:00.904396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c83540 with addr=10.0.0.2, port=4420 00:28:10.293 [2024-05-13 03:09:00.904412] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c83540 is same with the state(5) to be set 00:28:10.293 [2024-05-13 03:09:00.904434] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c83540 (9): Bad file descriptor 00:28:10.293 [2024-05-13 03:09:00.904467] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:10.293 [2024-05-13 03:09:00.904491] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:10.293 [2024-05-13 03:09:00.904505] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:10.293 [2024-05-13 03:09:00.904525] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:10.293 [2024-05-13 03:09:00.913972] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:10.293 [2024-05-13 03:09:00.914235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.293 [2024-05-13 03:09:00.914437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.293 [2024-05-13 03:09:00.914465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c83540 with addr=10.0.0.2, port=4420 00:28:10.293 [2024-05-13 03:09:00.914480] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c83540 is same with the state(5) to be set 00:28:10.293 [2024-05-13 03:09:00.914518] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c83540 (9): Bad file descriptor 00:28:10.293 [2024-05-13 03:09:00.914564] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:10.293 [2024-05-13 03:09:00.914583] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:10.293 [2024-05-13 03:09:00.914596] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:10.293 [2024-05-13 03:09:00.914614] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:10.293 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.293 [2024-05-13 03:09:00.924044] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:10.293 [2024-05-13 03:09:00.924334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.293 [2024-05-13 03:09:00.924551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.293 [2024-05-13 03:09:00.924581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c83540 with addr=10.0.0.2, port=4420 00:28:10.293 [2024-05-13 03:09:00.924599] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c83540 is same with the state(5) to be set 00:28:10.293 [2024-05-13 03:09:00.924625] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c83540 (9): Bad file descriptor 00:28:10.293 [2024-05-13 03:09:00.924662] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:10.293 [2024-05-13 03:09:00.924682] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:10.293 [2024-05-13 03:09:00.924705] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:10.293 [2024-05-13 03:09:00.924729] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
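The connect() and "Resetting controller failed" errors above are the expected fallout of the next step: after a second listener on 4421 is added and picked up through an AER-driven discovery log page fetch, the test removes the original 4420 listener, so the existing 4420 path can no longer reconnect and the discovery poller eventually prunes it, leaving only 4421. A sketch of that add/remove/verify step, reusing the same assumed SPDK_DIR, RPC sockets and poll helper as the previous sketches:

#!/usr/bin/env bash
# Listener add/remove step from the trace above; paths and helper names are assumptions
# carried over from the earlier sketches.
set -euo pipefail

SPDK_DIR=${SPDK_DIR:-/path/to/spdk}   # hypothetical location of a built SPDK tree
host_rpc() { "$SPDK_DIR/scripts/rpc.py" -s /tmp/host.sock "$@"; }
tgt_rpc()  { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }
wait_for() {
    local cond=$1 max=10
    while (( max-- )); do
        if eval "$cond"; then return 0; fi
        sleep 1
    done
    return 1
}

# get_subsystem_paths from the trace: the transport service IDs (TCP ports) of every
# path currently attached to the named controller.
get_subsystem_paths() {
    host_rpc bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}

CNODE=nqn.2016-06.io.spdk:cnode0

# Add a second data listener on 4421: the discovery AER triggers a log page fetch and
# a second path is attached next to the original 4420 one.
tgt_rpc nvmf_subsystem_add_listener "$CNODE" -t tcp -a 10.0.0.2 -s 4421
wait_for '[[ "$(get_subsystem_paths nvme0)" == "4420 4421" ]]'

# Remove the original listener: the stale 4420 path keeps failing to reconnect (the
# connect()/reset errors in the log) until the discovery poller prunes it.
tgt_rpc nvmf_subsystem_remove_listener "$CNODE" -t tcp -a 10.0.0.2 -s 4420
wait_for '[[ "$(get_subsystem_paths nvme0)" == "4421" ]]'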
00:28:10.293 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:10.293 [2024-05-13 03:09:00.932682] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:28:10.293 [2024-05-13 03:09:00.932722] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:10.293 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:28:10.293 03:09:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:28:10.293 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:28:10.293 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:28:10.293 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:28:10.294 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:28:10.294 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:28:10.294 03:09:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:10.294 03:09:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:10.294 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.294 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:10.294 03:09:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:10.294 03:09:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:10.294 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.294 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:28:10.294 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:28:10.294 03:09:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:28:10.294 03:09:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:28:10.294 03:09:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:10.294 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:10.294 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:28:10.294 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:28:10.294 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:10.294 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:28:10.294 03:09:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:10.294 03:09:00 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.294 03:09:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:28:10.294 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:10.294 03:09:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.294 03:09:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:28:10.294 03:09:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:28:10.294 03:09:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:28:10.294 03:09:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:28:10.294 03:09:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:28:10.294 03:09:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.294 03:09:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:10.294 03:09:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.294 03:09:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:28:10.294 03:09:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:28:10.294 03:09:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:28:10.294 03:09:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:28:10.294 03:09:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:28:10.294 03:09:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:28:10.294 03:09:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:10.294 03:09:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.294 03:09:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:10.294 03:09:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:10.294 03:09:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:10.294 03:09:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:10.294 03:09:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.294 03:09:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:28:10.294 03:09:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:28:10.294 03:09:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:28:10.294 03:09:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:28:10.294 03:09:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:28:10.294 03:09:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:28:10.294 03:09:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:28:10.294 03:09:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:28:10.294 
03:09:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:10.294 03:09:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:10.294 03:09:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.294 03:09:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:10.294 03:09:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:10.294 03:09:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:10.294 03:09:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.552 03:09:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:28:10.552 03:09:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:28:10.552 03:09:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:28:10.552 03:09:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:28:10.552 03:09:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:10.552 03:09:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:10.552 03:09:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:28:10.552 03:09:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:28:10.552 03:09:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:10.552 03:09:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:28:10.552 03:09:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:10.552 03:09:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.552 03:09:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:10.552 03:09:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:10.552 03:09:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.552 03:09:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:28:10.553 03:09:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:28:10.553 03:09:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:28:10.553 03:09:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:28:10.553 03:09:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:10.553 03:09:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.553 03:09:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:11.486 [2024-05-13 03:09:02.202944] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:11.486 [2024-05-13 03:09:02.202999] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:11.486 [2024-05-13 03:09:02.203022] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:11.743 [2024-05-13 03:09:02.290293] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:28:12.002 [2024-05-13 03:09:02.557280] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:12.002 [2024-05-13 03:09:02.557330] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:12.002 03:09:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.002 03:09:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:12.002 03:09:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:28:12.002 03:09:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:12.002 03:09:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:12.002 03:09:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:12.002 03:09:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:12.002 03:09:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:12.002 03:09:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:12.002 03:09:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.002 03:09:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:12.002 request: 00:28:12.002 { 00:28:12.002 "name": "nvme", 00:28:12.002 "trtype": 
"tcp", 00:28:12.002 "traddr": "10.0.0.2", 00:28:12.002 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:12.002 "adrfam": "ipv4", 00:28:12.002 "trsvcid": "8009", 00:28:12.002 "wait_for_attach": true, 00:28:12.002 "method": "bdev_nvme_start_discovery", 00:28:12.002 "req_id": 1 00:28:12.002 } 00:28:12.002 Got JSON-RPC error response 00:28:12.002 response: 00:28:12.002 { 00:28:12.002 "code": -17, 00:28:12.002 "message": "File exists" 00:28:12.002 } 00:28:12.002 03:09:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:12.002 03:09:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:28:12.002 03:09:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:12.002 03:09:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:12.002 03:09:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:12.002 03:09:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:28:12.002 03:09:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:12.002 03:09:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:12.002 03:09:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.002 03:09:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:12.002 03:09:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:28:12.002 03:09:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:28:12.002 03:09:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.002 03:09:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:28:12.002 03:09:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:28:12.002 03:09:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:12.002 03:09:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:12.002 03:09:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.002 03:09:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:12.002 03:09:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:12.002 03:09:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:12.002 03:09:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.002 03:09:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:12.002 03:09:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:12.002 03:09:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:28:12.002 03:09:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:12.002 03:09:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:12.002 03:09:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:28:12.002 03:09:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:12.002 03:09:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:12.002 03:09:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:12.002 03:09:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.002 03:09:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:12.002 request: 00:28:12.002 { 00:28:12.002 "name": "nvme_second", 00:28:12.002 "trtype": "tcp", 00:28:12.002 "traddr": "10.0.0.2", 00:28:12.002 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:12.002 "adrfam": "ipv4", 00:28:12.002 "trsvcid": "8009", 00:28:12.002 "wait_for_attach": true, 00:28:12.002 "method": "bdev_nvme_start_discovery", 00:28:12.002 "req_id": 1 00:28:12.002 } 00:28:12.002 Got JSON-RPC error response 00:28:12.002 response: 00:28:12.002 { 00:28:12.002 "code": -17, 00:28:12.002 "message": "File exists" 00:28:12.003 } 00:28:12.003 03:09:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:12.003 03:09:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:28:12.003 03:09:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:12.003 03:09:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:12.003 03:09:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:12.003 03:09:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:28:12.003 03:09:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:12.003 03:09:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.003 03:09:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:12.003 03:09:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:12.003 03:09:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:28:12.003 03:09:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:28:12.003 03:09:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.003 03:09:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:28:12.003 03:09:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:28:12.003 03:09:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:12.003 03:09:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:12.003 03:09:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.003 03:09:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:12.003 03:09:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:12.003 03:09:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:12.003 03:09:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.003 03:09:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:12.003 03:09:02 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:12.003 03:09:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:28:12.003 03:09:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:12.003 03:09:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:12.003 03:09:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:12.003 03:09:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:12.003 03:09:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:12.003 03:09:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:12.003 03:09:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.003 03:09:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:13.377 [2024-05-13 03:09:03.768899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.377 [2024-05-13 03:09:03.769217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.377 [2024-05-13 03:09:03.769248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91270 with addr=10.0.0.2, port=8010 00:28:13.377 [2024-05-13 03:09:03.769282] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:13.377 [2024-05-13 03:09:03.769298] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:13.377 [2024-05-13 03:09:03.769314] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:28:14.308 [2024-05-13 03:09:04.771376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.308 [2024-05-13 03:09:04.771673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.308 [2024-05-13 03:09:04.771713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb58f0 with addr=10.0.0.2, port=8010 00:28:14.308 [2024-05-13 03:09:04.771764] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:14.308 [2024-05-13 03:09:04.771779] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:14.308 [2024-05-13 03:09:04.771793] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:28:15.238 [2024-05-13 03:09:05.773454] bdev_nvme.c:7010:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:28:15.238 request: 00:28:15.238 { 00:28:15.238 "name": "nvme_second", 00:28:15.238 "trtype": "tcp", 00:28:15.238 "traddr": "10.0.0.2", 00:28:15.238 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:15.238 "adrfam": "ipv4", 00:28:15.238 "trsvcid": "8010", 00:28:15.238 "attach_timeout_ms": 3000, 00:28:15.238 "method": "bdev_nvme_start_discovery", 00:28:15.238 "req_id": 1 00:28:15.238 } 00:28:15.238 Got JSON-RPC error response 00:28:15.238 response: 
00:28:15.238 { 00:28:15.238 "code": -110, 00:28:15.238 "message": "Connection timed out" 00:28:15.238 } 00:28:15.238 03:09:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:15.238 03:09:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:28:15.238 03:09:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:15.238 03:09:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:15.238 03:09:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:15.239 03:09:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:28:15.239 03:09:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:15.239 03:09:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.239 03:09:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:15.239 03:09:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:15.239 03:09:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:28:15.239 03:09:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:28:15.239 03:09:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.239 03:09:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:28:15.239 03:09:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:28:15.239 03:09:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 449924 00:28:15.239 03:09:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:28:15.239 03:09:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:15.239 03:09:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:28:15.239 03:09:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:15.239 03:09:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:28:15.239 03:09:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:15.239 03:09:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:15.239 rmmod nvme_tcp 00:28:15.239 rmmod nvme_fabrics 00:28:15.239 rmmod nvme_keyring 00:28:15.239 03:09:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:15.239 03:09:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:28:15.239 03:09:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:28:15.239 03:09:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 449785 ']' 00:28:15.239 03:09:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 449785 00:28:15.239 03:09:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 449785 ']' 00:28:15.239 03:09:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 449785 00:28:15.239 03:09:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname 00:28:15.239 03:09:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:15.239 03:09:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 449785 00:28:15.239 03:09:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- 
# process_name=reactor_1 00:28:15.239 03:09:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:28:15.239 03:09:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 449785' 00:28:15.239 killing process with pid 449785 00:28:15.239 03:09:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 449785 00:28:15.239 [2024-05-13 03:09:05.916887] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:15.239 03:09:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 449785 00:28:15.497 03:09:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:15.497 03:09:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:15.497 03:09:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:15.497 03:09:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:15.497 03:09:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:15.497 03:09:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:15.497 03:09:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:15.497 03:09:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:18.028 03:09:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:18.028 00:28:18.028 real 0m13.490s 00:28:18.028 user 0m19.449s 00:28:18.028 sys 0m2.897s 00:28:18.028 03:09:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:18.028 03:09:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:18.028 ************************************ 00:28:18.028 END TEST nvmf_host_discovery 00:28:18.028 ************************************ 00:28:18.028 03:09:08 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:28:18.028 03:09:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:18.029 03:09:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:18.029 03:09:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:18.029 ************************************ 00:28:18.029 START TEST nvmf_host_multipath_status 00:28:18.029 ************************************ 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:28:18.029 * Looking for test storage... 
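Before the multipath test output continues, a note on the polling pattern that dominates the nvmf_host_discovery trace above (the repeated autotest_common.sh@910-914 lines): waitforcondition is a small retry loop that re-evaluates a caller-supplied condition string until it holds. The following is a minimal sketch reconstructed from the xtrace lines only; the sleep between retries and the non-zero failure return are assumptions, since the log only ever shows the successful path.

# Sketch of the helper traced at autotest_common.sh@910-914 above (not its verbatim body).
waitforcondition() {
    local cond=$1          # e.g. '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
    local max=10           # matches "local max=10" in the trace
    while (( max-- )); do  # matches "(( max-- ))" at @912
        if eval "$cond"; then
            return 0       # condition met, as at @914 above
        fi
        sleep 1            # assumed pacing between retries; not visible in this log
    done
    return 1               # assumed timeout behaviour
}
# host/discovery.sh@131 above calls it as:
#   waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'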
00:28:18.029 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:18.029 03:09:08 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:28:18.029 03:09:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:19.929 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:19.929 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
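The gather_supported_nvmf_pci_devs trace above buckets the machine's NICs into family arrays (e810, x722, mlx) keyed by PCI vendor:device ID and then selects the E810 ports for this TCP run. A condensed sketch of that bucketing follows; it assumes a pre-populated pci_bus_cache associative array, as in the traced script, and lists only a subset of the Mellanox IDs shown in the trace.

# Condensed sketch of the NIC bucketing traced in nvmf/common.sh above.
# Assumes pci_bus_cache["<vendor>:<device>"] already maps to the matching PCI addresses.
intel=0x8086 mellanox=0x15b3
e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})
x722=(${pci_bus_cache["$intel:0x37d2"]})
mlx=(${pci_bus_cache["$mellanox:0x101d"]} ${pci_bus_cache["$mellanox:0x1017"]})  # subset of the IDs in the trace
pci_devs=("${e810[@]}")   # on this rig the two 0x159b (E810) ports 0000:0a:00.0/1 are picked for the TCP run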
00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:19.929 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:19.929 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:19.929 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:19.930 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:19.930 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:19.930 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:19.930 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:19.930 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:19.930 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:19.930 03:09:10 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:19.930 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:19.930 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:19.930 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:19.930 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:19.930 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:19.930 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:19.930 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:19.930 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:19.930 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:19.930 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:19.930 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:19.930 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:19.930 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:19.930 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:28:19.930 00:28:19.930 --- 10.0.0.2 ping statistics --- 00:28:19.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:19.930 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:28:19.930 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:19.930 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:19.930 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:28:19.930 00:28:19.930 --- 10.0.0.1 ping statistics --- 00:28:19.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:19.930 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:28:19.930 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:19.930 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:28:19.930 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:19.930 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:19.930 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:19.930 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:19.930 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:19.930 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:19.930 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:19.930 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:28:19.930 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:19.930 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:19.930 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:19.930 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=453571 00:28:19.930 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:28:19.930 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 453571 00:28:19.930 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 453571 ']' 00:28:19.930 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:19.930 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:19.930 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:19.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:19.930 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:19.930 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:19.930 [2024-05-13 03:09:10.422309] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:28:19.930 [2024-05-13 03:09:10.422381] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:19.930 EAL: No free 2048 kB hugepages reported on node 1 00:28:19.930 [2024-05-13 03:09:10.460185] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:28:19.930 [2024-05-13 03:09:10.491525] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:19.930 [2024-05-13 03:09:10.584384] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:19.930 [2024-05-13 03:09:10.584442] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:19.930 [2024-05-13 03:09:10.584458] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:19.930 [2024-05-13 03:09:10.584472] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:19.930 [2024-05-13 03:09:10.584483] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:19.930 [2024-05-13 03:09:10.584582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:19.930 [2024-05-13 03:09:10.584588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:19.930 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:19.930 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:28:19.930 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:19.930 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:19.930 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:19.930 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:19.930 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=453571 00:28:19.930 03:09:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:20.503 [2024-05-13 03:09:11.000059] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:20.503 03:09:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:28:20.503 Malloc0 00:28:20.762 03:09:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:28:21.019 03:09:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:21.019 03:09:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:21.276 [2024-05-13 03:09:12.046475] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:21.276 [2024-05-13 03:09:12.046801] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:21.276 03:09:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:21.533 [2024-05-13 03:09:12.291377] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:21.533 03:09:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=453853 00:28:21.533 03:09:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:28:21.533 03:09:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:21.533 03:09:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 453853 /var/tmp/bdevperf.sock 00:28:21.533 03:09:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 453853 ']' 00:28:21.533 03:09:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:21.533 03:09:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:21.533 03:09:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:21.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:21.533 03:09:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:21.533 03:09:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:22.098 03:09:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:22.098 03:09:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:28:22.098 03:09:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:28:22.098 03:09:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:28:22.662 Nvme0n1 00:28:22.663 03:09:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:28:23.227 Nvme0n1 00:28:23.227 03:09:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:28:23.227 03:09:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:28:25.128 03:09:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:28:25.128 03:09:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 -n optimized 00:28:25.386 03:09:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:25.644 03:09:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:28:26.578 03:09:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:28:26.578 03:09:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:26.578 03:09:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:26.579 03:09:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:26.836 03:09:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:26.836 03:09:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:26.836 03:09:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:26.836 03:09:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:27.094 03:09:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:27.094 03:09:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:27.094 03:09:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:27.094 03:09:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:27.352 03:09:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:27.352 03:09:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:27.352 03:09:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:27.352 03:09:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:27.610 03:09:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:27.610 03:09:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:27.610 03:09:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:27.610 03:09:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 
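The check_status/port_status lines above and below all follow one pattern: query bdevperf's bdev_nvme_get_io_paths RPC over /var/tmp/bdevperf.sock and pull one field of one path out with jq. A standalone sketch of that check, using the rpc.py path and socket taken from this log, is shown here; port_status is a reconstruction of the helper traced at host/multipath_status.sh@64, not its verbatim body.

# Sketch of the per-port path check driven at host/multipath_status.sh@64 above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

port_status() {            # e.g. port_status 4420 current true
    local port=$1 field=$2 expected=$3
    local value
    value=$($rpc -s $sock bdev_nvme_get_io_paths | \
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
    [[ $value == "$expected" ]]
}
# check_status above simply strings several such calls together, e.g.
#   port_status 4420 current true && port_status 4421 current false
# after each set_ANA_state change.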
00:28:27.868 03:09:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:27.868 03:09:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:27.868 03:09:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:27.868 03:09:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:28.126 03:09:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:28.126 03:09:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:28:28.126 03:09:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:28.384 03:09:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:28.642 03:09:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:28:29.577 03:09:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:28:29.578 03:09:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:29.578 03:09:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:29.578 03:09:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:29.836 03:09:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:29.836 03:09:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:29.836 03:09:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:29.836 03:09:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:30.094 03:09:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:30.094 03:09:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:30.094 03:09:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:30.094 03:09:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:30.353 03:09:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:30.353 03:09:21 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:30.353 03:09:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:30.353 03:09:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:30.611 03:09:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:30.611 03:09:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:30.611 03:09:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:30.611 03:09:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:30.869 03:09:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:30.869 03:09:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:30.869 03:09:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:30.869 03:09:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:31.127 03:09:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:31.127 03:09:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:28:31.127 03:09:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:31.385 03:09:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:28:31.644 03:09:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:28:32.578 03:09:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:28:32.578 03:09:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:32.578 03:09:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:32.578 03:09:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:32.836 03:09:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:32.836 03:09:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:32.836 03:09:23 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:32.836 03:09:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:33.101 03:09:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:33.101 03:09:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:33.101 03:09:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:33.101 03:09:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:33.358 03:09:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:33.358 03:09:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:33.358 03:09:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:33.358 03:09:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:33.615 03:09:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:33.615 03:09:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:33.615 03:09:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:33.615 03:09:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:33.873 03:09:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:33.873 03:09:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:33.873 03:09:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:33.873 03:09:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:34.131 03:09:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:34.131 03:09:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:28:34.131 03:09:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:34.390 03:09:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:34.648 03:09:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:28:35.582 03:09:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:28:35.582 03:09:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:35.582 03:09:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:35.582 03:09:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:35.841 03:09:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:35.841 03:09:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:35.841 03:09:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:35.841 03:09:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:36.099 03:09:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:36.099 03:09:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:36.099 03:09:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:36.099 03:09:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:36.357 03:09:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:36.357 03:09:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:36.357 03:09:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:36.357 03:09:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:36.615 03:09:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:36.615 03:09:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:36.615 03:09:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:36.615 03:09:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:36.873 03:09:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:36.873 03:09:27 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:36.873 03:09:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:36.873 03:09:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:37.130 03:09:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:37.130 03:09:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:28:37.130 03:09:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:28:37.388 03:09:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:37.660 03:09:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:28:38.633 03:09:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:28:38.633 03:09:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:38.633 03:09:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:38.633 03:09:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:38.891 03:09:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:38.891 03:09:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:38.891 03:09:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:38.891 03:09:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:39.149 03:09:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:39.149 03:09:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:39.149 03:09:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:39.149 03:09:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:39.407 03:09:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:39.407 03:09:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:39.407 03:09:30 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:39.407 03:09:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:39.664 03:09:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:39.664 03:09:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:28:39.664 03:09:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:39.664 03:09:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:39.922 03:09:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:39.922 03:09:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:39.922 03:09:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:39.922 03:09:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:40.178 03:09:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:40.179 03:09:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:28:40.179 03:09:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:28:40.436 03:09:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:40.694 03:09:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:28:41.628 03:09:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:28:41.628 03:09:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:41.628 03:09:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:41.628 03:09:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:41.886 03:09:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:41.886 03:09:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:41.886 03:09:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:28:41.886 03:09:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:42.144 03:09:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:42.144 03:09:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:42.144 03:09:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:42.144 03:09:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:42.403 03:09:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:42.403 03:09:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:42.403 03:09:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:42.403 03:09:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:42.661 03:09:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:42.661 03:09:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:28:42.661 03:09:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:42.661 03:09:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:42.918 03:09:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:42.918 03:09:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:42.918 03:09:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:42.918 03:09:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:43.175 03:09:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:43.175 03:09:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:28:43.432 03:09:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:28:43.432 03:09:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:28:43.689 03:09:34 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:43.948 03:09:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:28:44.880 03:09:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:28:44.880 03:09:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:44.880 03:09:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:44.880 03:09:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:45.138 03:09:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:45.138 03:09:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:45.138 03:09:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:45.138 03:09:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:45.396 03:09:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:45.396 03:09:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:45.396 03:09:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:45.396 03:09:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:45.655 03:09:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:45.655 03:09:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:45.655 03:09:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:45.655 03:09:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:45.913 03:09:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:45.913 03:09:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:45.913 03:09:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:45.913 03:09:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:46.171 03:09:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ 
true == \t\r\u\e ]] 00:28:46.172 03:09:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:46.172 03:09:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:46.172 03:09:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:46.430 03:09:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:46.430 03:09:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:28:46.430 03:09:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:46.688 03:09:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:46.946 03:09:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:28:47.881 03:09:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:28:47.881 03:09:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:48.140 03:09:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:48.140 03:09:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:48.398 03:09:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:48.398 03:09:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:48.398 03:09:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:48.398 03:09:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:48.398 03:09:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:48.398 03:09:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:48.398 03:09:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:48.398 03:09:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:48.657 03:09:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:48.657 03:09:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected 
true 00:28:48.657 03:09:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:48.657 03:09:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:48.915 03:09:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:48.915 03:09:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:48.915 03:09:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:48.915 03:09:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:49.173 03:09:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:49.173 03:09:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:49.173 03:09:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:49.173 03:09:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:49.432 03:09:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:49.432 03:09:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:28:49.432 03:09:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:49.690 03:09:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:28:49.949 03:09:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:28:51.356 03:09:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:28:51.356 03:09:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:51.356 03:09:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:51.356 03:09:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:51.356 03:09:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:51.356 03:09:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:51.356 03:09:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:51.356 03:09:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:51.615 03:09:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:51.615 03:09:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:51.615 03:09:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:51.615 03:09:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:51.873 03:09:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:51.873 03:09:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:51.873 03:09:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:51.873 03:09:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:52.131 03:09:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:52.131 03:09:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:52.131 03:09:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:52.131 03:09:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:52.389 03:09:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:52.389 03:09:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:52.389 03:09:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:52.389 03:09:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:52.647 03:09:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:52.647 03:09:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:28:52.647 03:09:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:52.905 03:09:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4421 -n inaccessible 00:28:53.163 03:09:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:28:54.094 03:09:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:28:54.094 03:09:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:54.094 03:09:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:54.094 03:09:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:54.351 03:09:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:54.351 03:09:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:54.351 03:09:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:54.351 03:09:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:54.608 03:09:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:54.608 03:09:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:54.608 03:09:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:54.608 03:09:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:54.865 03:09:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:54.865 03:09:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:54.865 03:09:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:54.865 03:09:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:55.123 03:09:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:55.123 03:09:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:55.123 03:09:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:55.123 03:09:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:55.381 03:09:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:55.381 03:09:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:55.381 
03:09:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:55.381 03:09:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:55.639 03:09:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:55.639 03:09:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 453853 00:28:55.639 03:09:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 453853 ']' 00:28:55.639 03:09:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 453853 00:28:55.639 03:09:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:28:55.639 03:09:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:55.639 03:09:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 453853 00:28:55.639 03:09:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:28:55.639 03:09:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:28:55.639 03:09:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 453853' 00:28:55.639 killing process with pid 453853 00:28:55.639 03:09:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 453853 00:28:55.639 03:09:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 453853 00:28:55.639 Connection closed with partial response: 00:28:55.639 00:28:55.639 00:28:55.900 03:09:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 453853 00:28:55.900 03:09:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:55.900 [2024-05-13 03:09:12.349543] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:28:55.900 [2024-05-13 03:09:12.349644] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid453853 ] 00:28:55.900 EAL: No free 2048 kB hugepages reported on node 1 00:28:55.900 [2024-05-13 03:09:12.382404] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:55.900 [2024-05-13 03:09:12.410624] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:55.900 [2024-05-13 03:09:12.497731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:55.900 Running I/O for 90 seconds... 
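(Editor's note: from this point the log switches from the host-side path checks to bdevperf's 90-second verify run, and the nvme_qpair prints that follow show in-flight commands on qid:1 being completed with ASYMMETRIC ACCESS INACCESSIBLE (the 03/02 status in the prints) while a listener's ANA state is inaccessible. A short sketch of the two-call ANA flip that drives those transitions is below; the RPC script path, NQN, addresses and ports match the log, and the function name set_ana_state mirrors the test's set_ANA_state helper but is written here only for illustration.)

  #!/usr/bin/env bash
  # Sketch of the ANA-state flip that produces the INACCESSIBLE completions recorded below.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1

  set_ana_state() {
      # $1 = ANA state for the 4420 listener, $2 = ANA state for the 4421 listener
      $RPC nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      $RPC nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }

  # While bdevperf runs verify I/O, marking a path inaccessible makes the target
  # fail queued commands with ASYMMETRIC ACCESS INACCESSIBLE, which the host-side
  # multipath logic then uses to steer I/O to the remaining accessible path.
  set_ana_state non_optimized inaccessible
  sleep 1
  set_ana_state inaccessible inaccessible
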
00:28:55.900 [2024-05-13 03:09:28.053485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:79744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.900 [2024-05-13 03:09:28.053536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:55.900 [2024-05-13 03:09:28.053605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:79760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.900 [2024-05-13 03:09:28.053625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:55.901 [2024-05-13 03:09:28.053647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:79768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.901 [2024-05-13 03:09:28.053663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:55.901 [2024-05-13 03:09:28.053709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.901 [2024-05-13 03:09:28.053727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:55.901 [2024-05-13 03:09:28.053764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.901 [2024-05-13 03:09:28.053781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:55.901 [2024-05-13 03:09:28.053804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.901 [2024-05-13 03:09:28.053821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:55.901 [2024-05-13 03:09:28.053843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.901 [2024-05-13 03:09:28.053861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:55.901 [2024-05-13 03:09:28.053883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.901 [2024-05-13 03:09:28.053898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:55.901 [2024-05-13 03:09:28.053920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:79816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.901 [2024-05-13 03:09:28.053935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:55.901 [2024-05-13 03:09:28.053957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.901 [2024-05-13 03:09:28.053972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:25 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:55.901 [2024-05-13 03:09:28.054019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:79832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.901 [2024-05-13 03:09:28.054036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:55.901 [2024-05-13 03:09:28.054057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.901 [2024-05-13 03:09:28.054087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:55.901 [2024-05-13 03:09:28.054108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:79848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.901 [2024-05-13 03:09:28.054124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:55.901 [2024-05-13 03:09:28.054145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.901 [2024-05-13 03:09:28.054160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:55.901 [2024-05-13 03:09:28.054181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:79864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.901 [2024-05-13 03:09:28.054196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:55.901 [2024-05-13 03:09:28.054217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:79872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.901 [2024-05-13 03:09:28.054232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:55.901 [2024-05-13 03:09:28.055250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:79880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.901 [2024-05-13 03:09:28.055279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:55.901 [2024-05-13 03:09:28.055313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.901 [2024-05-13 03:09:28.055331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:55.901 [2024-05-13 03:09:28.055356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:79896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.901 [2024-05-13 03:09:28.055373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:55.901 [2024-05-13 03:09:28.055397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:79904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.901 [2024-05-13 03:09:28.055413] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:55.901 [2024-05-13 03:09:28.055438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:79912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.901 [2024-05-13 03:09:28.055454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:55.901 [2024-05-13 03:09:28.055478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.901 [2024-05-13 03:09:28.055495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:55.901 [2024-05-13 03:09:28.055519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.901 [2024-05-13 03:09:28.055541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:55.901 [2024-05-13 03:09:28.055566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:79936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.901 [2024-05-13 03:09:28.055583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:55.901 [2024-05-13 03:09:28.055607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:79944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.901 [2024-05-13 03:09:28.055623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:55.901 [2024-05-13 03:09:28.055647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:79952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.901 [2024-05-13 03:09:28.055663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:55.901 [2024-05-13 03:09:28.055687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:79960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.901 [2024-05-13 03:09:28.055713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:55.901 [2024-05-13 03:09:28.055739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:79968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.901 [2024-05-13 03:09:28.055756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:55.901 [2024-05-13 03:09:28.055780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.901 [2024-05-13 03:09:28.055798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:55.901 [2024-05-13 03:09:28.055822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:79984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.901 
[2024-05-13 03:09:28.055838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:55.901 [2024-05-13 03:09:28.055862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:79992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.901 [2024-05-13 03:09:28.055878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:55.901 [2024-05-13 03:09:28.055903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.901 [2024-05-13 03:09:28.055919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:55.901 [2024-05-13 03:09:28.055943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:80008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.901 [2024-05-13 03:09:28.055959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:55.901 [2024-05-13 03:09:28.055983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.901 [2024-05-13 03:09:28.056014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:55.901 [2024-05-13 03:09:28.056039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:80024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.901 [2024-05-13 03:09:28.056059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:55.901 [2024-05-13 03:09:28.056083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.901 [2024-05-13 03:09:28.056114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:55.901 [2024-05-13 03:09:28.056137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:80040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.901 [2024-05-13 03:09:28.056167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:55.901 [2024-05-13 03:09:28.056192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.901 [2024-05-13 03:09:28.056208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:55.901 [2024-05-13 03:09:28.056231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:80056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.901 [2024-05-13 03:09:28.056247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:55.901 [2024-05-13 03:09:28.056271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:80064 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.901 [2024-05-13 03:09:28.056288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:55.902 [2024-05-13 03:09:28.056382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:79752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.902 [2024-05-13 03:09:28.056420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:55.902 [2024-05-13 03:09:28.056465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.902 [2024-05-13 03:09:28.056484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:55.902 [2024-05-13 03:09:28.056510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:80080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.902 [2024-05-13 03:09:28.056542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:55.902 [2024-05-13 03:09:28.056568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.902 [2024-05-13 03:09:28.056585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:55.902 [2024-05-13 03:09:28.056610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.902 [2024-05-13 03:09:28.056627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:55.902 [2024-05-13 03:09:28.056653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.902 [2024-05-13 03:09:28.056669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:55.902 [2024-05-13 03:09:28.056716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.902 [2024-05-13 03:09:28.056735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:55.902 [2024-05-13 03:09:28.056782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.902 [2024-05-13 03:09:28.056800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:55.902 [2024-05-13 03:09:28.056841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.902 [2024-05-13 03:09:28.056858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:55.902 [2024-05-13 03:09:28.056884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:53 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.902 [2024-05-13 03:09:28.056915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:55.902 [2024-05-13 03:09:28.056940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.902 [2024-05-13 03:09:28.056956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:55.902 [2024-05-13 03:09:28.056997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.902 [2024-05-13 03:09:28.057014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:55.902 [2024-05-13 03:09:28.057040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.902 [2024-05-13 03:09:28.057057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:55.902 [2024-05-13 03:09:28.057097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.902 [2024-05-13 03:09:28.057113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:55.902 [2024-05-13 03:09:28.057138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:80176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.902 [2024-05-13 03:09:28.057154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:55.902 [2024-05-13 03:09:28.057179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.902 [2024-05-13 03:09:28.057196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:55.902 [2024-05-13 03:09:28.057221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:80192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.902 [2024-05-13 03:09:28.057237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:55.902 [2024-05-13 03:09:28.057276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.902 [2024-05-13 03:09:28.057292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:55.902 [2024-05-13 03:09:28.057316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:80208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.902 [2024-05-13 03:09:28.057347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:55.902 [2024-05-13 03:09:28.057377] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:80216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.902 [2024-05-13 03:09:28.057394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:55.902 [2024-05-13 03:09:28.057419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:80224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.902 [2024-05-13 03:09:28.057434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:55.902 [2024-05-13 03:09:28.057459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:80232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.902 [2024-05-13 03:09:28.057475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:55.902 [2024-05-13 03:09:28.057500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:80240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.902 [2024-05-13 03:09:28.057517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:55.902 [2024-05-13 03:09:28.057542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:80248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.902 [2024-05-13 03:09:28.057558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:55.902 [2024-05-13 03:09:28.057582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.902 [2024-05-13 03:09:28.057599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:55.902 [2024-05-13 03:09:28.057638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.902 [2024-05-13 03:09:28.057655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:55.902 [2024-05-13 03:09:28.057679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.902 [2024-05-13 03:09:28.057719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:55.902 [2024-05-13 03:09:28.057748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:80280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.902 [2024-05-13 03:09:28.057765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:55.902 [2024-05-13 03:09:28.057791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.902 [2024-05-13 03:09:28.057807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004a 
p:0 m:0 dnr:0 00:28:55.902 [2024-05-13 03:09:28.057833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.902 [2024-05-13 03:09:28.057849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:55.902 [2024-05-13 03:09:28.057874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:80304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.902 [2024-05-13 03:09:28.057891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:55.902 [2024-05-13 03:09:28.057917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:80312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.902 [2024-05-13 03:09:28.057938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:55.902 [2024-05-13 03:09:28.058060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:80320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.902 [2024-05-13 03:09:28.058096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:55.902 [2024-05-13 03:09:28.058129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:80328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.902 [2024-05-13 03:09:28.058147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:55.902 [2024-05-13 03:09:28.058175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:80336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.902 [2024-05-13 03:09:28.058192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:55.902 [2024-05-13 03:09:28.058220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:80344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.902 [2024-05-13 03:09:28.058237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:55.902 [2024-05-13 03:09:28.058265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:80352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.902 [2024-05-13 03:09:28.058282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:55.902 [2024-05-13 03:09:28.058310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:80360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.902 [2024-05-13 03:09:28.058326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:55.902 [2024-05-13 03:09:28.058354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:80368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.902 [2024-05-13 03:09:28.058370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:55.902 [2024-05-13 03:09:28.058416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.902 [2024-05-13 03:09:28.058433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:55.903 [2024-05-13 03:09:28.058459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:80384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.903 [2024-05-13 03:09:28.058475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:55.903 [2024-05-13 03:09:28.058503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:80392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.903 [2024-05-13 03:09:28.058520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:55.903 [2024-05-13 03:09:28.058546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:80400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.903 [2024-05-13 03:09:28.058562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:55.903 [2024-05-13 03:09:28.058588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:80408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.903 [2024-05-13 03:09:28.058609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:55.903 [2024-05-13 03:09:28.058637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.903 [2024-05-13 03:09:28.058654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:55.903 [2024-05-13 03:09:28.058680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:80424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.903 [2024-05-13 03:09:28.058723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:55.903 [2024-05-13 03:09:28.058757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.903 [2024-05-13 03:09:28.058774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:55.903 [2024-05-13 03:09:28.058802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:80440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.903 [2024-05-13 03:09:28.058819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:55.903 [2024-05-13 03:09:28.058846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:80448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.903 [2024-05-13 03:09:28.058863] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:55.903 [2024-05-13 03:09:28.058907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:80456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.903 [2024-05-13 03:09:28.058924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:55.903 [2024-05-13 03:09:28.058950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:80464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.903 [2024-05-13 03:09:28.058966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:55.903 [2024-05-13 03:09:28.059008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:80472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.903 [2024-05-13 03:09:28.059024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:55.903 [2024-05-13 03:09:28.059050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:80480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.903 [2024-05-13 03:09:28.059080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:55.903 [2024-05-13 03:09:28.059106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.903 [2024-05-13 03:09:28.059121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:55.903 [2024-05-13 03:09:28.059146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:80496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.903 [2024-05-13 03:09:28.059162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:55.903 [2024-05-13 03:09:28.059187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:80504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.903 [2024-05-13 03:09:28.059202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:55.903 [2024-05-13 03:09:28.059231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:80512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.903 [2024-05-13 03:09:28.059247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:55.903 [2024-05-13 03:09:28.059273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.903 [2024-05-13 03:09:28.059288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:55.903 [2024-05-13 03:09:28.059313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:80528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:55.903 [2024-05-13 03:09:28.059328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:55.903 [2024-05-13 03:09:28.059353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:80536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.903 [2024-05-13 03:09:28.059368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:55.903 [2024-05-13 03:09:28.059393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:80544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.903 [2024-05-13 03:09:28.059409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:55.903 [2024-05-13 03:09:28.059434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:80552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.903 [2024-05-13 03:09:28.059449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:55.903 [2024-05-13 03:09:28.059474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:80560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.903 [2024-05-13 03:09:28.059489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:55.903 [2024-05-13 03:09:28.059515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:80568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.903 [2024-05-13 03:09:28.059530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:55.903 [2024-05-13 03:09:28.059644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.903 [2024-05-13 03:09:28.059679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:55.903 [2024-05-13 03:09:28.059734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:80584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.903 [2024-05-13 03:09:28.059754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:55.903 [2024-05-13 03:09:28.059784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:80592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.903 [2024-05-13 03:09:28.059801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:55.903 [2024-05-13 03:09:28.059831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:80600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.903 [2024-05-13 03:09:28.059847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:55.903 [2024-05-13 03:09:28.059880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 
lba:80608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.903 [2024-05-13 03:09:28.059897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:55.903 [2024-05-13 03:09:28.059926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:80616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.903 [2024-05-13 03:09:28.059943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:55.903 [2024-05-13 03:09:28.059985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:80624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.903 [2024-05-13 03:09:28.060002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:55.903 [2024-05-13 03:09:28.060044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:80632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.903 [2024-05-13 03:09:28.060060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:55.903 [2024-05-13 03:09:28.060087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:80640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.903 [2024-05-13 03:09:28.060103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:55.903 [2024-05-13 03:09:28.060130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:80648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.903 [2024-05-13 03:09:28.060146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:55.903 [2024-05-13 03:09:28.060173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:80656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.903 [2024-05-13 03:09:28.060189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:55.903 [2024-05-13 03:09:28.060216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:80664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.903 [2024-05-13 03:09:28.060231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:55.903 [2024-05-13 03:09:28.060258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:80672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.903 [2024-05-13 03:09:28.060274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:55.903 [2024-05-13 03:09:28.060301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:80680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.903 [2024-05-13 03:09:28.060316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:55.903 [2024-05-13 03:09:28.060343] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.903 [2024-05-13 03:09:28.060358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:55.903 [2024-05-13 03:09:28.060385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:80696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.903 [2024-05-13 03:09:28.060401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:55.903 [2024-05-13 03:09:28.060428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:80704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.904 [2024-05-13 03:09:28.060447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:55.904 [2024-05-13 03:09:28.060475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:80712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.904 [2024-05-13 03:09:28.060492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:55.904 [2024-05-13 03:09:43.696617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:73808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.904 [2024-05-13 03:09:43.696684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:55.904 [2024-05-13 03:09:43.696764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:73824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.904 [2024-05-13 03:09:43.696786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:55.904 [2024-05-13 03:09:43.696811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:73344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.904 [2024-05-13 03:09:43.696830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:55.904 [2024-05-13 03:09:43.696954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:73376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.904 [2024-05-13 03:09:43.696982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:55.904 [2024-05-13 03:09:43.697011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:73408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.904 [2024-05-13 03:09:43.697029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:55.904 [2024-05-13 03:09:43.697053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:73448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.904 [2024-05-13 03:09:43.697069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 
00:28:55.904 [2024-05-13 03:09:43.697096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:73480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.904 [2024-05-13 03:09:43.697112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:55.904 [2024-05-13 03:09:43.697135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:73512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.904 [2024-05-13 03:09:43.697151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:55.904 [2024-05-13 03:09:43.697173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:73544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.904 [2024-05-13 03:09:43.697189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:55.904 [2024-05-13 03:09:43.697212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:73576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.904 [2024-05-13 03:09:43.697228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:55.904 [2024-05-13 03:09:43.697250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:73472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.904 [2024-05-13 03:09:43.697275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:55.904 [2024-05-13 03:09:43.697300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:73504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.904 [2024-05-13 03:09:43.697316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:55.904 [2024-05-13 03:09:43.697339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:73536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.904 [2024-05-13 03:09:43.697355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:55.904 [2024-05-13 03:09:43.697377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:73568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.904 [2024-05-13 03:09:43.697394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.904 [2024-05-13 03:09:43.697416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:73840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.904 [2024-05-13 03:09:43.697432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.904 [2024-05-13 03:09:43.697455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:73856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.904 [2024-05-13 03:09:43.697471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:55.904 [2024-05-13 03:09:43.697493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:73608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.904 [2024-05-13 03:09:43.697509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:55.904 [2024-05-13 03:09:43.697532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:73632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.904 [2024-05-13 03:09:43.697549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:55.904 [2024-05-13 03:09:43.697571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:73664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.904 [2024-05-13 03:09:43.697587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:55.904 [2024-05-13 03:09:43.697610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:73600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.904 [2024-05-13 03:09:43.697626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:55.904 [2024-05-13 03:09:43.697648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:73640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.904 [2024-05-13 03:09:43.697665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:55.904 [2024-05-13 03:09:43.697688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:73672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.904 [2024-05-13 03:09:43.697713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:55.904 [2024-05-13 03:09:43.698549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:73696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.904 [2024-05-13 03:09:43.698573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:55.904 [2024-05-13 03:09:43.698616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:73728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.904 [2024-05-13 03:09:43.698633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:55.904 [2024-05-13 03:09:43.698655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:73760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.904 [2024-05-13 03:09:43.698670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:55.904 [2024-05-13 03:09:43.698717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:73792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.904 [2024-05-13 03:09:43.698735] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:55.904 [2024-05-13 03:09:43.698757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:73880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.904 [2024-05-13 03:09:43.698773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:55.904 [2024-05-13 03:09:43.698795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:73704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.904 [2024-05-13 03:09:43.698810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:55.904 [2024-05-13 03:09:43.698832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:73736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.904 [2024-05-13 03:09:43.698848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:55.904 [2024-05-13 03:09:43.698870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:73768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.904 [2024-05-13 03:09:43.698885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:55.904 [2024-05-13 03:09:43.698907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:73800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.904 [2024-05-13 03:09:43.698923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:55.904 [2024-05-13 03:09:43.700243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:73904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.904 [2024-05-13 03:09:43.700284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:55.904 [2024-05-13 03:09:43.700318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:73920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.904 [2024-05-13 03:09:43.700337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:55.904 [2024-05-13 03:09:43.700360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:73936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.904 [2024-05-13 03:09:43.700377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:55.904 [2024-05-13 03:09:43.700399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:73952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.904 [2024-05-13 03:09:43.700416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:55.904 [2024-05-13 03:09:43.700447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:73968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:55.904 [2024-05-13 03:09:43.700464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:55.904 [2024-05-13 03:09:43.700487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:73984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.904 [2024-05-13 03:09:43.700503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:55.904 [2024-05-13 03:09:43.700526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:74000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.904 [2024-05-13 03:09:43.700543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:55.904 Received shutdown signal, test time was about 32.296896 seconds 00:28:55.904 00:28:55.905 Latency(us) 00:28:55.905 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:55.905 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:28:55.905 Verification LBA range: start 0x0 length 0x4000 00:28:55.905 Nvme0n1 : 32.30 8066.61 31.51 0.00 0.00 15842.08 321.61 4026531.84 00:28:55.905 =================================================================================================================== 00:28:55.905 Total : 8066.61 31.51 0.00 0.00 15842.08 321.61 4026531.84 00:28:55.905 03:09:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:56.163 03:09:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:28:56.163 03:09:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:56.163 03:09:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:28:56.163 03:09:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:56.163 03:09:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:28:56.163 03:09:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:56.163 03:09:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:28:56.163 03:09:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:56.163 03:09:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:56.163 rmmod nvme_tcp 00:28:56.163 rmmod nvme_fabrics 00:28:56.163 rmmod nvme_keyring 00:28:56.163 03:09:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:56.163 03:09:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:28:56.163 03:09:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:28:56.163 03:09:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 453571 ']' 00:28:56.163 03:09:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 453571 00:28:56.163 03:09:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 453571 ']' 00:28:56.163 03:09:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 
453571 00:28:56.163 03:09:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:28:56.163 03:09:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:56.163 03:09:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 453571 00:28:56.164 03:09:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:56.164 03:09:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:56.164 03:09:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 453571' 00:28:56.164 killing process with pid 453571 00:28:56.164 03:09:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 453571 00:28:56.164 [2024-05-13 03:09:46.816847] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:56.164 03:09:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 453571 00:28:56.422 03:09:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:56.422 03:09:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:56.422 03:09:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:56.422 03:09:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:56.422 03:09:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:56.422 03:09:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:56.422 03:09:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:56.422 03:09:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.327 03:09:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:58.327 00:28:58.327 real 0m40.867s 00:28:58.327 user 2m3.287s 00:28:58.327 sys 0m10.493s 00:28:58.327 03:09:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:58.327 03:09:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:58.327 ************************************ 00:28:58.327 END TEST nvmf_host_multipath_status 00:28:58.327 ************************************ 00:28:58.586 03:09:49 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:58.586 03:09:49 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:58.586 03:09:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:58.586 03:09:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:58.586 ************************************ 00:28:58.586 START TEST nvmf_discovery_remove_ifc 00:28:58.586 ************************************ 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:58.586 * Looking for test storage... 
00:28:58.586 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:28:58.586 03:09:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:00.488 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:00.488 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:00.488 03:09:51 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:00.488 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:00.488 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:00.488 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:00.488 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:00.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:29:00.489 00:29:00.489 --- 10.0.0.2 ping statistics --- 00:29:00.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:00.489 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:29:00.489 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:00.489 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:00.489 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:29:00.489 00:29:00.489 --- 10.0.0.1 ping statistics --- 00:29:00.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:00.489 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:29:00.489 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:00.489 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:29:00.489 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:00.489 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:00.489 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:00.489 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:00.489 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:00.489 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:00.489 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:00.489 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:29:00.489 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:00.489 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:00.489 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:00.489 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=459964 00:29:00.489 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:00.489 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 459964 00:29:00.747 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 459964 ']' 00:29:00.747 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:00.747 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:00.747 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:00.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:00.747 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:00.747 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:00.747 [2024-05-13 03:09:51.331435] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:29:00.747 [2024-05-13 03:09:51.331531] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:00.747 EAL: No free 2048 kB hugepages reported on node 1 00:29:00.747 [2024-05-13 03:09:51.369464] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:29:00.747 [2024-05-13 03:09:51.397131] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:00.747 [2024-05-13 03:09:51.481314] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:00.747 [2024-05-13 03:09:51.481368] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:00.747 [2024-05-13 03:09:51.481391] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:00.747 [2024-05-13 03:09:51.481402] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:00.747 [2024-05-13 03:09:51.481411] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:00.747 [2024-05-13 03:09:51.481436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:01.005 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:01.005 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:29:01.005 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:01.005 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:01.005 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:01.005 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:01.005 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:29:01.005 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.005 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:01.005 [2024-05-13 03:09:51.615306] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:01.005 [2024-05-13 03:09:51.623234] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:01.005 [2024-05-13 03:09:51.623509] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:29:01.005 null0 00:29:01.005 [2024-05-13 03:09:51.655422] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:01.005 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.005 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=460075 00:29:01.005 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:29:01.005 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 460075 /tmp/host.sock 00:29:01.005 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 460075 ']' 00:29:01.005 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:29:01.005 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:01.005 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:29:01.005 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:29:01.005 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:01.005 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:01.005 [2024-05-13 03:09:51.718315] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:29:01.005 [2024-05-13 03:09:51.718378] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid460075 ] 00:29:01.006 EAL: No free 2048 kB hugepages reported on node 1 00:29:01.006 [2024-05-13 03:09:51.750402] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:01.006 [2024-05-13 03:09:51.780857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.263 [2024-05-13 03:09:51.872424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:01.263 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:01.263 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:29:01.263 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:01.263 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:29:01.263 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.263 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:01.263 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.263 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:29:01.263 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.263 03:09:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:01.263 03:09:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.263 03:09:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:29:01.263 03:09:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.263 03:09:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:02.636 [2024-05-13 03:09:53.075926] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:02.636 [2024-05-13 03:09:53.075958] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:02.636 [2024-05-13 03:09:53.075982] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] 
sent discovery log page command 00:29:02.636 [2024-05-13 03:09:53.162280] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:29:02.636 [2024-05-13 03:09:53.263866] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:29:02.636 [2024-05-13 03:09:53.263926] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:29:02.636 [2024-05-13 03:09:53.263962] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:29:02.636 [2024-05-13 03:09:53.263984] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:02.636 [2024-05-13 03:09:53.264028] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:02.636 03:09:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.636 03:09:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:29:02.636 03:09:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:02.636 03:09:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:02.636 03:09:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:02.636 03:09:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.636 03:09:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:02.636 03:09:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:02.636 03:09:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:02.636 [2024-05-13 03:09:53.271886] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x971ec0 was disconnected and freed. delete nvme_qpair. 
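The block above is the setup half of the discovery_remove_ifc flow: a second SPDK app is launched as the NVMe-oF host on /tmp/host.sock, bdev_nvme debug logging is enabled, and discovery is started against the target's discovery service on 10.0.0.2:8009, after which the namespace shows up as bdev nvme0n1. A minimal sketch of that sequence, assuming scripts/rpc.py as the RPC client (the trace itself goes through the suite's rpc_cmd wrapper) and with the flags copied verbatim from the trace:

#!/usr/bin/env bash
# Host-side setup as seen in the trace above (sketch, not the actual test script).
SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path taken from the trace
HOST_SOCK=/tmp/host.sock

# Second SPDK app acting as the host: single core, own RPC socket, bdev_nvme debug logs.
"$SPDK_ROOT/build/bin/nvmf_tgt" -m 0x1 -r "$HOST_SOCK" --wait-for-rpc -L bdev_nvme &
hostpid=$!
# (The real test waits for the RPC socket to appear first -- waitforlisten in the trace.)

# Options are copied from the trace; framework init must finish before bdevs can be used.
"$SPDK_ROOT/scripts/rpc.py" -s "$HOST_SOCK" bdev_nvme_set_options -e 1
"$SPDK_ROOT/scripts/rpc.py" -s "$HOST_SOCK" framework_start_init

# Attach through the discovery service; short loss/reconnect timeouts so the
# interface-removal step later fails the controller quickly.
"$SPDK_ROOT/scripts/rpc.py" -s "$HOST_SOCK" bdev_nvme_start_discovery \
    -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 --wait-for-attach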
00:29:02.636 03:09:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.636 03:09:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:29:02.636 03:09:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:29:02.636 03:09:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:29:02.636 03:09:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:29:02.636 03:09:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:02.636 03:09:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:02.636 03:09:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:02.636 03:09:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.636 03:09:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:02.636 03:09:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:02.636 03:09:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:02.636 03:09:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.636 03:09:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:02.636 03:09:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:04.007 03:09:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:04.007 03:09:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:04.007 03:09:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:04.007 03:09:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.007 03:09:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:04.007 03:09:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:04.007 03:09:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:04.007 03:09:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.007 03:09:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:04.007 03:09:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:04.941 03:09:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:04.941 03:09:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:04.941 03:09:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:04.941 03:09:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.941 03:09:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:04.941 03:09:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
sort 00:29:04.941 03:09:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:04.941 03:09:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.941 03:09:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:04.941 03:09:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:05.875 03:09:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:05.875 03:09:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:05.875 03:09:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:05.875 03:09:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:05.875 03:09:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:05.875 03:09:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:05.875 03:09:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:05.875 03:09:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:05.875 03:09:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:05.875 03:09:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:06.852 03:09:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:06.852 03:09:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:06.852 03:09:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:06.852 03:09:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.852 03:09:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:06.852 03:09:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:06.852 03:09:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:06.852 03:09:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.852 03:09:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:06.852 03:09:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:08.226 03:09:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:08.227 03:09:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:08.227 03:09:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.227 03:09:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:08.227 03:09:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:08.227 03:09:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:08.227 03:09:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:08.227 03:09:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
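Each repeat of the block above is one pass of the test's polling loop: get_bdev_list asks the host app for its bdev names over /tmp/host.sock, and wait_for_bdev sleeps one second between passes until the list equals the expected value. A rough reconstruction of the two helpers from the pipeline visible in the trace (rpc_cmd is the suite's wrapper around scripts/rpc.py; the real helpers may add a retry limit):

# List bdev names on the host app: JSON -> names -> sorted -> single line.
get_bdev_list() {
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

# Poll once per second until the list matches, e.g. "nvme0n1" while attached
# or "" after the target interface has been pulled.
wait_for_bdev() {
    local expected=$1
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}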
00:29:08.227 03:09:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:08.227 03:09:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:08.227 [2024-05-13 03:09:58.705479] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:29:08.227 [2024-05-13 03:09:58.705551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.227 [2024-05-13 03:09:58.705577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.227 [2024-05-13 03:09:58.705599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.227 [2024-05-13 03:09:58.705614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.227 [2024-05-13 03:09:58.705629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.227 [2024-05-13 03:09:58.705644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.227 [2024-05-13 03:09:58.705659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.227 [2024-05-13 03:09:58.705674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.227 [2024-05-13 03:09:58.705689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.227 [2024-05-13 03:09:58.705711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.227 [2024-05-13 03:09:58.705726] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x939060 is same with the state(5) to be set 00:29:08.227 [2024-05-13 03:09:58.715497] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x939060 (9): Bad file descriptor 00:29:08.227 [2024-05-13 03:09:58.725548] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:09.161 03:09:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:09.161 03:09:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:09.161 03:09:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.161 03:09:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:09.161 03:09:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:09.161 03:09:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:09.161 03:09:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:09.161 [2024-05-13 03:09:59.750734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:29:10.092 [2024-05-13 
03:10:00.774741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:29:10.092 [2024-05-13 03:10:00.774809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x939060 with addr=10.0.0.2, port=4420 00:29:10.092 [2024-05-13 03:10:00.774840] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x939060 is same with the state(5) to be set 00:29:10.092 [2024-05-13 03:10:00.775340] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x939060 (9): Bad file descriptor 00:29:10.092 [2024-05-13 03:10:00.775390] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.092 [2024-05-13 03:10:00.775429] bdev_nvme.c:6718:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:29:10.092 [2024-05-13 03:10:00.775472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.092 [2024-05-13 03:10:00.775496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.092 [2024-05-13 03:10:00.775519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.092 [2024-05-13 03:10:00.775535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.092 [2024-05-13 03:10:00.775550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.092 [2024-05-13 03:10:00.775564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.092 [2024-05-13 03:10:00.775579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.092 [2024-05-13 03:10:00.775593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.092 [2024-05-13 03:10:00.775608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.092 [2024-05-13 03:10:00.775622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.092 [2024-05-13 03:10:00.775637] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
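This is the fault-injection half of the test: with the address deleted and the link downed inside the target namespace (discovery_remove_ifc.sh@75/@76 earlier in the trace), the host's TCP connection starts failing with errno 110 (ETIMEDOUT), the resets give up within the 2-second ctrlr-loss timeout, the discovery entry is dropped, and nvme0n1 disappears. The trigger plus the wait, condensed from commands already shown in the trace:

# Pull the target-side interface out from under the live connection.
ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down

# Wait until the host reports an empty bdev list (wait_for_bdev from the sketch above).
wait_for_bdev ''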
00:29:10.092 [2024-05-13 03:10:00.775886] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9384f0 (9): Bad file descriptor 00:29:10.092 [2024-05-13 03:10:00.776908] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:29:10.092 [2024-05-13 03:10:00.776931] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:29:10.092 03:10:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.092 03:10:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:10.092 03:10:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:11.024 03:10:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:11.024 03:10:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:11.025 03:10:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:11.025 03:10:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.025 03:10:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:11.025 03:10:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:11.025 03:10:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:11.025 03:10:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.283 03:10:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:29:11.283 03:10:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:11.283 03:10:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:11.283 03:10:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:29:11.283 03:10:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:11.283 03:10:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:11.283 03:10:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.283 03:10:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:11.283 03:10:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:11.283 03:10:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:11.283 03:10:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:11.283 03:10:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.283 03:10:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:29:11.283 03:10:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:12.217 [2024-05-13 03:10:02.830063] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:12.217 [2024-05-13 03:10:02.830097] 
bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:12.217 [2024-05-13 03:10:02.830123] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:12.217 03:10:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:12.217 03:10:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:12.217 03:10:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:12.217 03:10:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.217 03:10:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:12.217 03:10:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:12.217 03:10:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:12.217 03:10:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.217 [2024-05-13 03:10:02.956546] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:29:12.217 03:10:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:29:12.217 03:10:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:12.217 [2024-05-13 03:10:03.018719] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:29:12.217 [2024-05-13 03:10:03.018782] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:29:12.217 [2024-05-13 03:10:03.018833] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:29:12.217 [2024-05-13 03:10:03.018856] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:29:12.217 [2024-05-13 03:10:03.018868] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:12.475 [2024-05-13 03:10:03.028155] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x97cbf0 was disconnected and freed. delete nvme_qpair. 
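Recovery mirrors the removal: the address and link are restored inside the namespace, the still-running discovery poller reconnects on its own, and the subsystem re-attaches under a new controller name (nvme1), so the test now waits for nvme1n1 rather than nvme0n1. Sketch using only commands already present in the trace:

# Restore the target interface; discovery reconnects and the data bdev returns as nvme1n1.
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up

wait_for_bdev nvme1n1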
00:29:13.411 03:10:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:13.411 03:10:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:13.411 03:10:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:13.411 03:10:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:13.411 03:10:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:13.411 03:10:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:13.411 03:10:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:13.411 03:10:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:13.411 03:10:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:29:13.411 03:10:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:29:13.411 03:10:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 460075 00:29:13.411 03:10:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 460075 ']' 00:29:13.411 03:10:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 460075 00:29:13.411 03:10:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:29:13.411 03:10:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:13.411 03:10:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 460075 00:29:13.411 03:10:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:13.411 03:10:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:13.411 03:10:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 460075' 00:29:13.411 killing process with pid 460075 00:29:13.411 03:10:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 460075 00:29:13.411 03:10:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 460075 00:29:13.668 03:10:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:29:13.668 03:10:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:13.668 03:10:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:29:13.668 03:10:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:13.668 03:10:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:29:13.668 03:10:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:13.668 03:10:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:13.668 rmmod nvme_tcp 00:29:13.668 rmmod nvme_fabrics 00:29:13.668 rmmod nvme_keyring 00:29:13.668 03:10:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:13.668 03:10:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:29:13.668 03:10:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:29:13.668 
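Teardown then runs in two stages: killprocess on the host app (pid 460075 above), followed by nvmftestfini, which unloads the kernel NVMe/TCP initiator modules, kills the target app (pid 459964, next entry) and clears the test network state. Condensed from the surrounding trace; the explicit namespace deletion is an assumption about what _remove_spdk_ns does, since its body is hidden behind xtrace_disable:

kill "$hostpid"; wait "$hostpid"      # killprocess on the host app

sync
modprobe -v -r nvme-tcp               # rmmod output shows nvme_tcp/nvme_fabrics/nvme_keyring unloading
modprobe -v -r nvme-fabrics

kill "$nvmfpid"; wait "$nvmfpid"      # killprocess on the nvmf target

# Assumption: _remove_spdk_ns deletes the test namespace; only the address
# flush on cvl_0_1 is shown explicitly in the trace.
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
ip -4 addr flush cvl_0_1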
03:10:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 459964 ']' 00:29:13.668 03:10:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 459964 00:29:13.668 03:10:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 459964 ']' 00:29:13.668 03:10:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 459964 00:29:13.668 03:10:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:29:13.668 03:10:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:13.668 03:10:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 459964 00:29:13.668 03:10:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:29:13.668 03:10:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:29:13.669 03:10:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 459964' 00:29:13.669 killing process with pid 459964 00:29:13.669 03:10:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 459964 00:29:13.669 [2024-05-13 03:10:04.343776] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:13.669 03:10:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 459964 00:29:13.927 03:10:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:13.927 03:10:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:13.927 03:10:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:13.927 03:10:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:13.927 03:10:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:13.927 03:10:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:13.927 03:10:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:13.927 03:10:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:16.457 03:10:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:16.457 00:29:16.457 real 0m17.455s 00:29:16.457 user 0m24.384s 00:29:16.457 sys 0m2.891s 00:29:16.457 03:10:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:16.457 03:10:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:16.457 ************************************ 00:29:16.457 END TEST nvmf_discovery_remove_ifc 00:29:16.457 ************************************ 00:29:16.457 03:10:06 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:29:16.457 03:10:06 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:16.457 03:10:06 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:16.457 03:10:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:16.457 
************************************ 00:29:16.457 START TEST nvmf_identify_kernel_target 00:29:16.457 ************************************ 00:29:16.457 03:10:06 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:29:16.457 * Looking for test storage... 00:29:16.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:16.457 03:10:06 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:16.457 03:10:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:29:16.457 03:10:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:16.457 03:10:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:16.457 03:10:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:16.457 03:10:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:16.457 03:10:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:16.457 03:10:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:16.457 03:10:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:16.457 03:10:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:16.457 03:10:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:16.457 03:10:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:16.457 03:10:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:16.457 03:10:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:16.457 03:10:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:16.457 03:10:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:16.457 03:10:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:16.457 03:10:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:16.457 03:10:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:16.457 03:10:06 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:16.457 03:10:06 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:16.457 03:10:06 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:16.457 03:10:06 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.457 03:10:06 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.457 03:10:06 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.457 03:10:06 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:29:16.457 03:10:06 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.457 03:10:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:29:16.457 03:10:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:16.457 03:10:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:16.457 03:10:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:16.457 03:10:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:16.458 03:10:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:16.458 03:10:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:16.458 03:10:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:16.458 03:10:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:16.458 03:10:06 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:29:16.458 03:10:06 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:16.458 03:10:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:16.458 03:10:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:16.458 03:10:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:16.458 03:10:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:16.458 03:10:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:16.458 03:10:06 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:16.458 03:10:06 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:16.458 03:10:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:16.458 03:10:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:16.458 03:10:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:29:16.458 03:10:06 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:29:17.831 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:17.831 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:29:17.831 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:17.831 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:17.831 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:17.831 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:17.831 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:17.831 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:29:17.831 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:17.831 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:29:17.831 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:29:17.831 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:29:17.831 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:29:17.831 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:29:17.831 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:29:17.831 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:17.831 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:17.831 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:17.831 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:17.831 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:17.831 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:17.831 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:17.831 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:17.831 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:17.831 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:17.831 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:17.831 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:17.831 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:17.831 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:17.831 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:17.831 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:17.831 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:17.831 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:17.831 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:17.831 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:17.831 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:17.831 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:17.831 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:17.831 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:17.831 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:17.831 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:17.831 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:17.831 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:17.831 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:17.831 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:17.832 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:17.832 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:17.832 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:17.832 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:17.832 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:17.832 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:17.832 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:17.832 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:17.832 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:17.832 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:17.832 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:17.832 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:17.832 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:17.832 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:17.832 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:17.832 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:17.832 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:17.832 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:17.832 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:17.832 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:17.832 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:17.832 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:17.832 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:17.832 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:17.832 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:17.832 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:17.832 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:17.832 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:29:17.832 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:17.832 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:17.832 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:17.832 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:17.832 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:17.832 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:17.832 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:17.832 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:17.832 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:17.832 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:17.832 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:17.832 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:17.832 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:17.832 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:17.832 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:17.832 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:18.090 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:18.090 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:18.090 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:18.090 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:18.090 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:18.090 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:18.090 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:18.090 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:18.090 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:29:18.090 00:29:18.090 --- 10.0.0.2 ping statistics --- 00:29:18.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:18.090 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:29:18.090 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:18.090 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:18.090 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:29:18.090 00:29:18.090 --- 10.0.0.1 ping statistics --- 00:29:18.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:18.090 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:29:18.090 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:18.090 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:29:18.090 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:18.090 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:18.090 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:18.090 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:18.090 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:18.090 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:18.090 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:18.090 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:29:18.090 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:29:18.090 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@728 -- # local ip 00:29:18.090 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:18.090 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:18.090 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:18.090 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:18.090 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:18.090 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:18.090 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:18.090 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:18.090 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:18.090 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:29:18.090 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:29:18.090 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:29:18.090 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:29:18.091 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:18.091 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:18.091 03:10:08 
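Before the kernel target is populated below, the trace above has already built the TCP test topology: the NVMF_TARGET_INTERFACE (cvl_0_0) is moved into its own network namespace, each end gets an address on 10.0.0.0/24, an iptables rule opens the NVMe/TCP port, and a ping in each direction confirms connectivity. A condensed sketch of that plumbing, with interface names and addresses taken from the trace (run as root; order condensed, not the script verbatim):

ip netns add cvl_0_0_ns_spdk                      # namespace for the target-side NIC
ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move cvl_0_0 into it
ip addr add 10.0.0.1/24 dev cvl_0_1               # cvl_0_1 stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                # root ns -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # namespace -> root ns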
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:18.091 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:29:18.091 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:29:18.091 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:29:18.091 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:18.091 03:10:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:19.025 Waiting for block devices as requested 00:29:19.025 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:29:19.283 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:19.283 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:19.283 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:19.283 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:19.540 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:19.540 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:19.540 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:19.540 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:19.797 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:19.797 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:19.797 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:19.797 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:20.054 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:20.054 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:20.054 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:20.054 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:20.314 03:10:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:29:20.314 03:10:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:20.314 03:10:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:29:20.314 03:10:10 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:29:20.314 03:10:10 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:20.314 03:10:10 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:29:20.314 03:10:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:29:20.314 03:10:10 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:29:20.314 03:10:10 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:20.314 No valid GPT data, bailing 00:29:20.314 03:10:10 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:20.314 03:10:10 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:29:20.314 03:10:10 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:29:20.314 03:10:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:29:20.314 03:10:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:29:20.314 03:10:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:20.314 03:10:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:20.314 03:10:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:20.314 03:10:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:29:20.314 03:10:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:29:20.314 03:10:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:29:20.314 03:10:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:29:20.314 03:10:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:29:20.314 03:10:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:29:20.314 03:10:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:29:20.314 03:10:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:29:20.314 03:10:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:20.314 03:10:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:29:20.314 00:29:20.314 Discovery Log Number of Records 2, Generation counter 2 00:29:20.314 =====Discovery Log Entry 0====== 00:29:20.314 trtype: tcp 00:29:20.314 adrfam: ipv4 00:29:20.314 subtype: current discovery subsystem 00:29:20.314 treq: not specified, sq flow control disable supported 00:29:20.314 portid: 1 00:29:20.314 trsvcid: 4420 00:29:20.314 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:20.314 traddr: 10.0.0.1 00:29:20.314 eflags: none 00:29:20.314 sectype: none 00:29:20.314 =====Discovery Log Entry 1====== 00:29:20.314 trtype: tcp 00:29:20.314 adrfam: ipv4 00:29:20.314 subtype: nvme subsystem 00:29:20.314 treq: not specified, sq flow control disable supported 00:29:20.314 portid: 1 00:29:20.314 trsvcid: 4420 00:29:20.314 subnqn: nqn.2016-06.io.spdk:testnqn 00:29:20.314 traddr: 10.0.0.1 00:29:20.314 eflags: none 00:29:20.314 sectype: none 00:29:20.314 03:10:11 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:29:20.314 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:29:20.314 EAL: No free 2048 kB hugepages reported on node 1 00:29:20.314 ===================================================== 00:29:20.314 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:20.314 ===================================================== 00:29:20.314 Controller Capabilities/Features 00:29:20.314 ================================ 00:29:20.314 Vendor ID: 0000 00:29:20.314 Subsystem Vendor ID: 0000 00:29:20.314 Serial Number: 7f3931fea669d0b9ccd9 00:29:20.314 Model Number: Linux 00:29:20.314 Firmware Version: 6.7.0-68 00:29:20.314 Recommended Arb Burst: 0 00:29:20.314 IEEE OUI Identifier: 00 00 00 00:29:20.314 Multi-path I/O 00:29:20.314 May have multiple subsystem ports: No 00:29:20.314 May have multiple 
controllers: No 00:29:20.314 Associated with SR-IOV VF: No 00:29:20.314 Max Data Transfer Size: Unlimited 00:29:20.314 Max Number of Namespaces: 0 00:29:20.314 Max Number of I/O Queues: 1024 00:29:20.314 NVMe Specification Version (VS): 1.3 00:29:20.314 NVMe Specification Version (Identify): 1.3 00:29:20.314 Maximum Queue Entries: 1024 00:29:20.315 Contiguous Queues Required: No 00:29:20.315 Arbitration Mechanisms Supported 00:29:20.315 Weighted Round Robin: Not Supported 00:29:20.315 Vendor Specific: Not Supported 00:29:20.315 Reset Timeout: 7500 ms 00:29:20.315 Doorbell Stride: 4 bytes 00:29:20.315 NVM Subsystem Reset: Not Supported 00:29:20.315 Command Sets Supported 00:29:20.315 NVM Command Set: Supported 00:29:20.315 Boot Partition: Not Supported 00:29:20.315 Memory Page Size Minimum: 4096 bytes 00:29:20.315 Memory Page Size Maximum: 4096 bytes 00:29:20.315 Persistent Memory Region: Not Supported 00:29:20.315 Optional Asynchronous Events Supported 00:29:20.315 Namespace Attribute Notices: Not Supported 00:29:20.315 Firmware Activation Notices: Not Supported 00:29:20.315 ANA Change Notices: Not Supported 00:29:20.315 PLE Aggregate Log Change Notices: Not Supported 00:29:20.315 LBA Status Info Alert Notices: Not Supported 00:29:20.315 EGE Aggregate Log Change Notices: Not Supported 00:29:20.315 Normal NVM Subsystem Shutdown event: Not Supported 00:29:20.315 Zone Descriptor Change Notices: Not Supported 00:29:20.315 Discovery Log Change Notices: Supported 00:29:20.315 Controller Attributes 00:29:20.315 128-bit Host Identifier: Not Supported 00:29:20.315 Non-Operational Permissive Mode: Not Supported 00:29:20.315 NVM Sets: Not Supported 00:29:20.315 Read Recovery Levels: Not Supported 00:29:20.315 Endurance Groups: Not Supported 00:29:20.315 Predictable Latency Mode: Not Supported 00:29:20.315 Traffic Based Keep ALive: Not Supported 00:29:20.315 Namespace Granularity: Not Supported 00:29:20.315 SQ Associations: Not Supported 00:29:20.315 UUID List: Not Supported 00:29:20.315 Multi-Domain Subsystem: Not Supported 00:29:20.315 Fixed Capacity Management: Not Supported 00:29:20.315 Variable Capacity Management: Not Supported 00:29:20.315 Delete Endurance Group: Not Supported 00:29:20.315 Delete NVM Set: Not Supported 00:29:20.315 Extended LBA Formats Supported: Not Supported 00:29:20.315 Flexible Data Placement Supported: Not Supported 00:29:20.315 00:29:20.315 Controller Memory Buffer Support 00:29:20.315 ================================ 00:29:20.315 Supported: No 00:29:20.315 00:29:20.315 Persistent Memory Region Support 00:29:20.315 ================================ 00:29:20.315 Supported: No 00:29:20.315 00:29:20.315 Admin Command Set Attributes 00:29:20.315 ============================ 00:29:20.315 Security Send/Receive: Not Supported 00:29:20.315 Format NVM: Not Supported 00:29:20.315 Firmware Activate/Download: Not Supported 00:29:20.315 Namespace Management: Not Supported 00:29:20.315 Device Self-Test: Not Supported 00:29:20.315 Directives: Not Supported 00:29:20.315 NVMe-MI: Not Supported 00:29:20.315 Virtualization Management: Not Supported 00:29:20.315 Doorbell Buffer Config: Not Supported 00:29:20.315 Get LBA Status Capability: Not Supported 00:29:20.315 Command & Feature Lockdown Capability: Not Supported 00:29:20.315 Abort Command Limit: 1 00:29:20.315 Async Event Request Limit: 1 00:29:20.315 Number of Firmware Slots: N/A 00:29:20.315 Firmware Slot 1 Read-Only: N/A 00:29:20.315 Firmware Activation Without Reset: N/A 00:29:20.315 Multiple Update Detection Support: N/A 
00:29:20.315 Firmware Update Granularity: No Information Provided 00:29:20.315 Per-Namespace SMART Log: No 00:29:20.315 Asymmetric Namespace Access Log Page: Not Supported 00:29:20.315 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:29:20.315 Command Effects Log Page: Not Supported 00:29:20.315 Get Log Page Extended Data: Supported 00:29:20.315 Telemetry Log Pages: Not Supported 00:29:20.315 Persistent Event Log Pages: Not Supported 00:29:20.315 Supported Log Pages Log Page: May Support 00:29:20.315 Commands Supported & Effects Log Page: Not Supported 00:29:20.315 Feature Identifiers & Effects Log Page:May Support 00:29:20.315 NVMe-MI Commands & Effects Log Page: May Support 00:29:20.315 Data Area 4 for Telemetry Log: Not Supported 00:29:20.315 Error Log Page Entries Supported: 1 00:29:20.315 Keep Alive: Not Supported 00:29:20.315 00:29:20.315 NVM Command Set Attributes 00:29:20.315 ========================== 00:29:20.315 Submission Queue Entry Size 00:29:20.315 Max: 1 00:29:20.315 Min: 1 00:29:20.315 Completion Queue Entry Size 00:29:20.315 Max: 1 00:29:20.315 Min: 1 00:29:20.315 Number of Namespaces: 0 00:29:20.315 Compare Command: Not Supported 00:29:20.315 Write Uncorrectable Command: Not Supported 00:29:20.315 Dataset Management Command: Not Supported 00:29:20.315 Write Zeroes Command: Not Supported 00:29:20.315 Set Features Save Field: Not Supported 00:29:20.315 Reservations: Not Supported 00:29:20.315 Timestamp: Not Supported 00:29:20.315 Copy: Not Supported 00:29:20.315 Volatile Write Cache: Not Present 00:29:20.315 Atomic Write Unit (Normal): 1 00:29:20.315 Atomic Write Unit (PFail): 1 00:29:20.315 Atomic Compare & Write Unit: 1 00:29:20.315 Fused Compare & Write: Not Supported 00:29:20.315 Scatter-Gather List 00:29:20.315 SGL Command Set: Supported 00:29:20.315 SGL Keyed: Not Supported 00:29:20.315 SGL Bit Bucket Descriptor: Not Supported 00:29:20.315 SGL Metadata Pointer: Not Supported 00:29:20.315 Oversized SGL: Not Supported 00:29:20.315 SGL Metadata Address: Not Supported 00:29:20.315 SGL Offset: Supported 00:29:20.315 Transport SGL Data Block: Not Supported 00:29:20.315 Replay Protected Memory Block: Not Supported 00:29:20.315 00:29:20.315 Firmware Slot Information 00:29:20.315 ========================= 00:29:20.315 Active slot: 0 00:29:20.315 00:29:20.315 00:29:20.315 Error Log 00:29:20.315 ========= 00:29:20.315 00:29:20.315 Active Namespaces 00:29:20.315 ================= 00:29:20.315 Discovery Log Page 00:29:20.315 ================== 00:29:20.315 Generation Counter: 2 00:29:20.315 Number of Records: 2 00:29:20.315 Record Format: 0 00:29:20.315 00:29:20.315 Discovery Log Entry 0 00:29:20.315 ---------------------- 00:29:20.315 Transport Type: 3 (TCP) 00:29:20.315 Address Family: 1 (IPv4) 00:29:20.315 Subsystem Type: 3 (Current Discovery Subsystem) 00:29:20.315 Entry Flags: 00:29:20.315 Duplicate Returned Information: 0 00:29:20.315 Explicit Persistent Connection Support for Discovery: 0 00:29:20.315 Transport Requirements: 00:29:20.315 Secure Channel: Not Specified 00:29:20.315 Port ID: 1 (0x0001) 00:29:20.315 Controller ID: 65535 (0xffff) 00:29:20.315 Admin Max SQ Size: 32 00:29:20.315 Transport Service Identifier: 4420 00:29:20.315 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:20.315 Transport Address: 10.0.0.1 00:29:20.315 Discovery Log Entry 1 00:29:20.315 ---------------------- 00:29:20.315 Transport Type: 3 (TCP) 00:29:20.315 Address Family: 1 (IPv4) 00:29:20.315 Subsystem Type: 2 (NVM Subsystem) 00:29:20.315 Entry Flags: 
00:29:20.315 Duplicate Returned Information: 0 00:29:20.315 Explicit Persistent Connection Support for Discovery: 0 00:29:20.315 Transport Requirements: 00:29:20.315 Secure Channel: Not Specified 00:29:20.315 Port ID: 1 (0x0001) 00:29:20.315 Controller ID: 65535 (0xffff) 00:29:20.315 Admin Max SQ Size: 32 00:29:20.315 Transport Service Identifier: 4420 00:29:20.315 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:29:20.315 Transport Address: 10.0.0.1 00:29:20.315 03:10:11 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:20.315 EAL: No free 2048 kB hugepages reported on node 1 00:29:20.315 get_feature(0x01) failed 00:29:20.315 get_feature(0x02) failed 00:29:20.315 get_feature(0x04) failed 00:29:20.315 ===================================================== 00:29:20.315 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:20.315 ===================================================== 00:29:20.315 Controller Capabilities/Features 00:29:20.315 ================================ 00:29:20.315 Vendor ID: 0000 00:29:20.315 Subsystem Vendor ID: 0000 00:29:20.315 Serial Number: 31fa7098707c057ebea2 00:29:20.315 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:29:20.315 Firmware Version: 6.7.0-68 00:29:20.315 Recommended Arb Burst: 6 00:29:20.315 IEEE OUI Identifier: 00 00 00 00:29:20.315 Multi-path I/O 00:29:20.315 May have multiple subsystem ports: Yes 00:29:20.315 May have multiple controllers: Yes 00:29:20.315 Associated with SR-IOV VF: No 00:29:20.315 Max Data Transfer Size: Unlimited 00:29:20.315 Max Number of Namespaces: 1024 00:29:20.315 Max Number of I/O Queues: 128 00:29:20.315 NVMe Specification Version (VS): 1.3 00:29:20.315 NVMe Specification Version (Identify): 1.3 00:29:20.315 Maximum Queue Entries: 1024 00:29:20.315 Contiguous Queues Required: No 00:29:20.315 Arbitration Mechanisms Supported 00:29:20.315 Weighted Round Robin: Not Supported 00:29:20.315 Vendor Specific: Not Supported 00:29:20.315 Reset Timeout: 7500 ms 00:29:20.315 Doorbell Stride: 4 bytes 00:29:20.315 NVM Subsystem Reset: Not Supported 00:29:20.315 Command Sets Supported 00:29:20.315 NVM Command Set: Supported 00:29:20.315 Boot Partition: Not Supported 00:29:20.315 Memory Page Size Minimum: 4096 bytes 00:29:20.316 Memory Page Size Maximum: 4096 bytes 00:29:20.316 Persistent Memory Region: Not Supported 00:29:20.316 Optional Asynchronous Events Supported 00:29:20.316 Namespace Attribute Notices: Supported 00:29:20.316 Firmware Activation Notices: Not Supported 00:29:20.316 ANA Change Notices: Supported 00:29:20.316 PLE Aggregate Log Change Notices: Not Supported 00:29:20.316 LBA Status Info Alert Notices: Not Supported 00:29:20.316 EGE Aggregate Log Change Notices: Not Supported 00:29:20.316 Normal NVM Subsystem Shutdown event: Not Supported 00:29:20.316 Zone Descriptor Change Notices: Not Supported 00:29:20.316 Discovery Log Change Notices: Not Supported 00:29:20.316 Controller Attributes 00:29:20.316 128-bit Host Identifier: Supported 00:29:20.316 Non-Operational Permissive Mode: Not Supported 00:29:20.316 NVM Sets: Not Supported 00:29:20.316 Read Recovery Levels: Not Supported 00:29:20.316 Endurance Groups: Not Supported 00:29:20.316 Predictable Latency Mode: Not Supported 00:29:20.316 Traffic Based Keep ALive: Supported 00:29:20.316 Namespace Granularity: Not Supported 
00:29:20.316 SQ Associations: Not Supported 00:29:20.316 UUID List: Not Supported 00:29:20.316 Multi-Domain Subsystem: Not Supported 00:29:20.316 Fixed Capacity Management: Not Supported 00:29:20.316 Variable Capacity Management: Not Supported 00:29:20.316 Delete Endurance Group: Not Supported 00:29:20.316 Delete NVM Set: Not Supported 00:29:20.316 Extended LBA Formats Supported: Not Supported 00:29:20.316 Flexible Data Placement Supported: Not Supported 00:29:20.316 00:29:20.316 Controller Memory Buffer Support 00:29:20.316 ================================ 00:29:20.316 Supported: No 00:29:20.316 00:29:20.316 Persistent Memory Region Support 00:29:20.316 ================================ 00:29:20.316 Supported: No 00:29:20.316 00:29:20.316 Admin Command Set Attributes 00:29:20.316 ============================ 00:29:20.316 Security Send/Receive: Not Supported 00:29:20.316 Format NVM: Not Supported 00:29:20.316 Firmware Activate/Download: Not Supported 00:29:20.316 Namespace Management: Not Supported 00:29:20.316 Device Self-Test: Not Supported 00:29:20.316 Directives: Not Supported 00:29:20.316 NVMe-MI: Not Supported 00:29:20.316 Virtualization Management: Not Supported 00:29:20.316 Doorbell Buffer Config: Not Supported 00:29:20.316 Get LBA Status Capability: Not Supported 00:29:20.316 Command & Feature Lockdown Capability: Not Supported 00:29:20.316 Abort Command Limit: 4 00:29:20.316 Async Event Request Limit: 4 00:29:20.316 Number of Firmware Slots: N/A 00:29:20.316 Firmware Slot 1 Read-Only: N/A 00:29:20.316 Firmware Activation Without Reset: N/A 00:29:20.316 Multiple Update Detection Support: N/A 00:29:20.316 Firmware Update Granularity: No Information Provided 00:29:20.316 Per-Namespace SMART Log: Yes 00:29:20.316 Asymmetric Namespace Access Log Page: Supported 00:29:20.316 ANA Transition Time : 10 sec 00:29:20.316 00:29:20.316 Asymmetric Namespace Access Capabilities 00:29:20.316 ANA Optimized State : Supported 00:29:20.316 ANA Non-Optimized State : Supported 00:29:20.316 ANA Inaccessible State : Supported 00:29:20.316 ANA Persistent Loss State : Supported 00:29:20.316 ANA Change State : Supported 00:29:20.316 ANAGRPID is not changed : No 00:29:20.316 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:29:20.316 00:29:20.316 ANA Group Identifier Maximum : 128 00:29:20.316 Number of ANA Group Identifiers : 128 00:29:20.316 Max Number of Allowed Namespaces : 1024 00:29:20.316 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:29:20.316 Command Effects Log Page: Supported 00:29:20.316 Get Log Page Extended Data: Supported 00:29:20.316 Telemetry Log Pages: Not Supported 00:29:20.316 Persistent Event Log Pages: Not Supported 00:29:20.316 Supported Log Pages Log Page: May Support 00:29:20.316 Commands Supported & Effects Log Page: Not Supported 00:29:20.316 Feature Identifiers & Effects Log Page:May Support 00:29:20.316 NVMe-MI Commands & Effects Log Page: May Support 00:29:20.316 Data Area 4 for Telemetry Log: Not Supported 00:29:20.316 Error Log Page Entries Supported: 128 00:29:20.316 Keep Alive: Supported 00:29:20.316 Keep Alive Granularity: 1000 ms 00:29:20.316 00:29:20.316 NVM Command Set Attributes 00:29:20.316 ========================== 00:29:20.316 Submission Queue Entry Size 00:29:20.316 Max: 64 00:29:20.316 Min: 64 00:29:20.316 Completion Queue Entry Size 00:29:20.316 Max: 16 00:29:20.316 Min: 16 00:29:20.316 Number of Namespaces: 1024 00:29:20.316 Compare Command: Not Supported 00:29:20.316 Write Uncorrectable Command: Not Supported 00:29:20.316 Dataset Management Command: Supported 
00:29:20.316 Write Zeroes Command: Supported 00:29:20.316 Set Features Save Field: Not Supported 00:29:20.316 Reservations: Not Supported 00:29:20.316 Timestamp: Not Supported 00:29:20.316 Copy: Not Supported 00:29:20.316 Volatile Write Cache: Present 00:29:20.316 Atomic Write Unit (Normal): 1 00:29:20.316 Atomic Write Unit (PFail): 1 00:29:20.316 Atomic Compare & Write Unit: 1 00:29:20.316 Fused Compare & Write: Not Supported 00:29:20.316 Scatter-Gather List 00:29:20.316 SGL Command Set: Supported 00:29:20.316 SGL Keyed: Not Supported 00:29:20.316 SGL Bit Bucket Descriptor: Not Supported 00:29:20.316 SGL Metadata Pointer: Not Supported 00:29:20.316 Oversized SGL: Not Supported 00:29:20.316 SGL Metadata Address: Not Supported 00:29:20.316 SGL Offset: Supported 00:29:20.316 Transport SGL Data Block: Not Supported 00:29:20.316 Replay Protected Memory Block: Not Supported 00:29:20.316 00:29:20.316 Firmware Slot Information 00:29:20.316 ========================= 00:29:20.316 Active slot: 0 00:29:20.316 00:29:20.316 Asymmetric Namespace Access 00:29:20.316 =========================== 00:29:20.316 Change Count : 0 00:29:20.316 Number of ANA Group Descriptors : 1 00:29:20.316 ANA Group Descriptor : 0 00:29:20.316 ANA Group ID : 1 00:29:20.316 Number of NSID Values : 1 00:29:20.316 Change Count : 0 00:29:20.316 ANA State : 1 00:29:20.316 Namespace Identifier : 1 00:29:20.316 00:29:20.316 Commands Supported and Effects 00:29:20.316 ============================== 00:29:20.316 Admin Commands 00:29:20.316 -------------- 00:29:20.316 Get Log Page (02h): Supported 00:29:20.316 Identify (06h): Supported 00:29:20.316 Abort (08h): Supported 00:29:20.316 Set Features (09h): Supported 00:29:20.316 Get Features (0Ah): Supported 00:29:20.316 Asynchronous Event Request (0Ch): Supported 00:29:20.316 Keep Alive (18h): Supported 00:29:20.316 I/O Commands 00:29:20.316 ------------ 00:29:20.316 Flush (00h): Supported 00:29:20.316 Write (01h): Supported LBA-Change 00:29:20.316 Read (02h): Supported 00:29:20.316 Write Zeroes (08h): Supported LBA-Change 00:29:20.316 Dataset Management (09h): Supported 00:29:20.316 00:29:20.316 Error Log 00:29:20.316 ========= 00:29:20.316 Entry: 0 00:29:20.316 Error Count: 0x3 00:29:20.316 Submission Queue Id: 0x0 00:29:20.316 Command Id: 0x5 00:29:20.316 Phase Bit: 0 00:29:20.316 Status Code: 0x2 00:29:20.316 Status Code Type: 0x0 00:29:20.316 Do Not Retry: 1 00:29:20.575 Error Location: 0x28 00:29:20.575 LBA: 0x0 00:29:20.575 Namespace: 0x0 00:29:20.575 Vendor Log Page: 0x0 00:29:20.575 ----------- 00:29:20.575 Entry: 1 00:29:20.575 Error Count: 0x2 00:29:20.575 Submission Queue Id: 0x0 00:29:20.575 Command Id: 0x5 00:29:20.575 Phase Bit: 0 00:29:20.575 Status Code: 0x2 00:29:20.575 Status Code Type: 0x0 00:29:20.575 Do Not Retry: 1 00:29:20.575 Error Location: 0x28 00:29:20.575 LBA: 0x0 00:29:20.575 Namespace: 0x0 00:29:20.575 Vendor Log Page: 0x0 00:29:20.575 ----------- 00:29:20.575 Entry: 2 00:29:20.575 Error Count: 0x1 00:29:20.575 Submission Queue Id: 0x0 00:29:20.575 Command Id: 0x4 00:29:20.575 Phase Bit: 0 00:29:20.575 Status Code: 0x2 00:29:20.575 Status Code Type: 0x0 00:29:20.575 Do Not Retry: 1 00:29:20.575 Error Location: 0x28 00:29:20.575 LBA: 0x0 00:29:20.575 Namespace: 0x0 00:29:20.575 Vendor Log Page: 0x0 00:29:20.575 00:29:20.575 Number of Queues 00:29:20.575 ================ 00:29:20.575 Number of I/O Submission Queues: 128 00:29:20.575 Number of I/O Completion Queues: 128 00:29:20.575 00:29:20.575 ZNS Specific Controller Data 00:29:20.575 
============================ 00:29:20.575 Zone Append Size Limit: 0 00:29:20.575 00:29:20.575 00:29:20.575 Active Namespaces 00:29:20.575 ================= 00:29:20.575 get_feature(0x05) failed 00:29:20.575 Namespace ID:1 00:29:20.575 Command Set Identifier: NVM (00h) 00:29:20.575 Deallocate: Supported 00:29:20.575 Deallocated/Unwritten Error: Not Supported 00:29:20.575 Deallocated Read Value: Unknown 00:29:20.575 Deallocate in Write Zeroes: Not Supported 00:29:20.575 Deallocated Guard Field: 0xFFFF 00:29:20.575 Flush: Supported 00:29:20.575 Reservation: Not Supported 00:29:20.575 Namespace Sharing Capabilities: Multiple Controllers 00:29:20.575 Size (in LBAs): 1953525168 (931GiB) 00:29:20.575 Capacity (in LBAs): 1953525168 (931GiB) 00:29:20.576 Utilization (in LBAs): 1953525168 (931GiB) 00:29:20.576 UUID: 41260fdf-678e-4120-853f-0d4f7d201e9e 00:29:20.576 Thin Provisioning: Not Supported 00:29:20.576 Per-NS Atomic Units: Yes 00:29:20.576 Atomic Boundary Size (Normal): 0 00:29:20.576 Atomic Boundary Size (PFail): 0 00:29:20.576 Atomic Boundary Offset: 0 00:29:20.576 NGUID/EUI64 Never Reused: No 00:29:20.576 ANA group ID: 1 00:29:20.576 Namespace Write Protected: No 00:29:20.576 Number of LBA Formats: 1 00:29:20.576 Current LBA Format: LBA Format #00 00:29:20.576 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:20.576 00:29:20.576 03:10:11 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:29:20.576 03:10:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:20.576 03:10:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:29:20.576 03:10:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:20.576 03:10:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:29:20.576 03:10:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:20.576 03:10:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:20.576 rmmod nvme_tcp 00:29:20.576 rmmod nvme_fabrics 00:29:20.576 03:10:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:20.576 03:10:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:29:20.576 03:10:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:29:20.576 03:10:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:29:20.576 03:10:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:20.576 03:10:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:20.576 03:10:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:20.576 03:10:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:20.576 03:10:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:20.576 03:10:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:20.576 03:10:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:20.576 03:10:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:22.512 03:10:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:22.512 
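The two spdk_nvme_identify runs above were served by a kernel nvmet target that the test stitched together through configfs, backed by /dev/nvme0n1 once the "No valid GPT data, bailing" check showed the disk carries no partition table; that target is torn down in the clean_kernel_target step right below. A condensed setup/teardown sketch follows. The xtrace output does not show which attribute files the echo commands write to (redirections are not echoed), so the file names here follow the standard nvmet configfs layout and are an assumption rather than a quote of the script:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
modprobe nvmet                                      # exposes /sys/kernel/config/nvmet
mkdir $subsys $subsys/namespaces/1 $nvmet/ports/1
echo SPDK-nqn.2016-06.io.spdk:testnqn > $subsys/attr_model   # matches the Model Number reported above (assumed target file)
echo 1            > $subsys/attr_allow_any_host
echo /dev/nvme0n1 > $subsys/namespaces/1/device_path
echo 1            > $subsys/namespaces/1/enable
echo 10.0.0.1     > $nvmet/ports/1/addr_traddr
echo tcp          > $nvmet/ports/1/addr_trtype
echo 4420         > $nvmet/ports/1/addr_trsvcid
echo ipv4         > $nvmet/ports/1/addr_adrfam
ln -s $subsys $nvmet/ports/1/subsystems/            # publish the subsystem on the port
# Teardown, mirroring clean_kernel_target below (echo 0 assumed to hit the namespace enable flag):
echo 0 > $subsys/namespaces/1/enable
rm -f $nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
rmdir $subsys/namespaces/1 $nvmet/ports/1 $subsys
modprobe -r nvmet_tcp nvmet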
03:10:13 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:29:22.512 03:10:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:29:22.512 03:10:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:29:22.512 03:10:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:22.512 03:10:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:22.512 03:10:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:22.512 03:10:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:22.512 03:10:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:29:22.512 03:10:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:29:22.512 03:10:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:23.893 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:29:23.893 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:29:23.893 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:29:23.893 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:29:23.893 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:29:23.893 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:29:23.893 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:29:23.893 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:29:23.893 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:29:23.893 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:29:23.893 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:29:23.893 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:29:23.893 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:29:23.893 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:29:23.893 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:29:23.893 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:29:24.830 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:29:24.830 00:29:24.830 real 0m8.829s 00:29:24.830 user 0m1.708s 00:29:24.830 sys 0m3.210s 00:29:24.830 03:10:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:24.830 03:10:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:29:24.830 ************************************ 00:29:24.830 END TEST nvmf_identify_kernel_target 00:29:24.830 ************************************ 00:29:24.830 03:10:15 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_auth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:29:24.830 03:10:15 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:24.830 03:10:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:24.830 03:10:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:24.830 ************************************ 00:29:24.830 START TEST nvmf_auth 00:29:24.831 ************************************ 00:29:24.831 03:10:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:29:24.831 * 
Looking for test storage... 00:29:24.831 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:24.831 03:10:15 nvmf_tcp.nvmf_auth -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:24.831 03:10:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@7 -- # uname -s 00:29:24.831 03:10:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:24.831 03:10:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:24.831 03:10:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:24.831 03:10:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:24.831 03:10:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:24.831 03:10:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:24.831 03:10:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:24.831 03:10:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:24.831 03:10:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:24.831 03:10:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:24.831 03:10:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:24.831 03:10:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:24.831 03:10:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:24.831 03:10:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:24.831 03:10:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:24.831 03:10:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:24.831 03:10:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:24.831 03:10:15 nvmf_tcp.nvmf_auth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:24.831 03:10:15 nvmf_tcp.nvmf_auth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:24.831 03:10:15 nvmf_tcp.nvmf_auth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:24.831 03:10:15 nvmf_tcp.nvmf_auth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.090 03:10:15 nvmf_tcp.nvmf_auth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:29:25.090 03:10:15 nvmf_tcp.nvmf_auth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.090 03:10:15 nvmf_tcp.nvmf_auth -- paths/export.sh@5 -- # export PATH 00:29:25.090 03:10:15 nvmf_tcp.nvmf_auth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.090 03:10:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@47 -- # : 0 00:29:25.090 03:10:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:25.090 03:10:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:25.090 03:10:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:25.090 03:10:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:25.090 03:10:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:25.090 03:10:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:25.090 03:10:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:25.090 03:10:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:25.090 03:10:15 nvmf_tcp.nvmf_auth -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:29:25.090 03:10:15 nvmf_tcp.nvmf_auth -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:29:25.090 03:10:15 nvmf_tcp.nvmf_auth -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:29:25.090 03:10:15 nvmf_tcp.nvmf_auth -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:29:25.090 03:10:15 nvmf_tcp.nvmf_auth -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:25.090 03:10:15 nvmf_tcp.nvmf_auth -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:25.090 03:10:15 nvmf_tcp.nvmf_auth -- host/auth.sh@21 -- # keys=() 00:29:25.090 03:10:15 nvmf_tcp.nvmf_auth -- host/auth.sh@21 -- # ckeys=() 00:29:25.090 03:10:15 nvmf_tcp.nvmf_auth -- host/auth.sh@81 -- # nvmftestinit 00:29:25.090 03:10:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:25.090 03:10:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:25.090 03:10:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:25.090 03:10:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:25.090 03:10:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:25.090 03:10:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:29:25.090 03:10:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:25.090 03:10:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:25.090 03:10:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:25.090 03:10:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:25.090 03:10:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@285 -- # xtrace_disable 00:29:25.090 03:10:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:26.994 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:26.994 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@291 -- # pci_devs=() 00:29:26.994 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:26.994 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:26.994 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:26.994 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:26.994 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:26.994 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@295 -- # net_devs=() 00:29:26.994 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:26.994 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@296 -- # e810=() 00:29:26.994 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@296 -- # local -ga e810 00:29:26.994 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@297 -- # x722=() 00:29:26.994 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@297 -- # local -ga x722 00:29:26.994 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@298 -- # mlx=() 00:29:26.994 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@298 -- # local -ga mlx 00:29:26.994 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:26.994 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:26.994 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:26.994 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:26.994 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:26.994 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:26.994 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:26.994 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:26.994 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:26.995 03:10:17 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:26.995 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:26.995 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:26.995 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:26.995 Found net devices under 0000:0a:00.1: cvl_0_1 
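As in the earlier test, the scan above maps each supported PCI function to its kernel netdev by globbing the device's net/ directory in sysfs and keeping only the interface name. A minimal standalone sketch of that lookup (PCI addresses taken from the trace; the loop is a condensation of the nvmf/common.sh logic, not the script verbatim):

for pci in 0000:0a:00.0 0000:0a:00.1; do
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
  pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep the ifname
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
done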
00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@414 -- # is_hw=yes 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:26.995 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:26.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:29:26.995 00:29:26.995 --- 10.0.0.2 ping statistics --- 00:29:26.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:26.995 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:26.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:26.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:29:26.995 00:29:26.995 --- 10.0.0.1 ping statistics --- 00:29:26.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:26.995 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@422 -- # return 0 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- host/auth.sh@82 -- # nvmfappstart -L nvme_auth 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@481 -- # nvmfpid=467025 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@482 -- # waitforlisten 467025 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@827 -- # '[' -z 467025 ']' 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
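Note: condensing the nvmf_tcp_init steps just traced: the target-side port cvl_0_0 is moved into a private network namespace, 10.0.0.1/24 (initiator) and 10.0.0.2/24 (target) are assigned, TCP port 4420 is opened, connectivity is checked in both directions, and the nvmf_tgt application is started inside that namespace. A rough manual equivalent, assuming the same interface names and build path as this run:

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                     # target NIC lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                  # root namespace -> target IP
    ip netns exec "$NS" ping -c 1 10.0.0.1              # namespace -> initiator IP
    modprobe nvme-tcp
    # start the SPDK host application in the namespace, then wait for /var/tmp/spdk.sock
    ip netns exec "$NS" /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -L nvme_auth &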
00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:26.995 03:10:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:27.254 03:10:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:27.254 03:10:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@860 -- # return 0 00:29:27.254 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:27.254 03:10:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:27.254 03:10:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:27.254 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:27.254 03:10:17 nvmf_tcp.nvmf_auth -- host/auth.sh@83 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:29:27.254 03:10:17 nvmf_tcp.nvmf_auth -- host/auth.sh@86 -- # gen_key null 32 00:29:27.254 03:10:17 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:29:27.254 03:10:17 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:27.254 03:10:17 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:29:27.254 03:10:17 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=null 00:29:27.254 03:10:17 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=32 00:29:27.254 03:10:17 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:27.254 03:10:17 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=d5e0d1ad71f4a7a892f6b13cd76a21e4 00:29:27.254 03:10:17 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-null.XXX 00:29:27.254 03:10:17 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-null.rqN 00:29:27.254 03:10:17 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key d5e0d1ad71f4a7a892f6b13cd76a21e4 0 00:29:27.254 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 d5e0d1ad71f4a7a892f6b13cd76a21e4 0 00:29:27.254 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:29:27.254 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:29:27.254 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=d5e0d1ad71f4a7a892f6b13cd76a21e4 00:29:27.254 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=0 00:29:27.254 03:10:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:29:27.254 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-null.rqN 00:29:27.255 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-null.rqN 00:29:27.255 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@86 -- # keys[0]=/tmp/spdk.key-null.rqN 00:29:27.255 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@86 -- # gen_key sha512 64 00:29:27.255 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:29:27.255 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:27.255 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:29:27.255 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha512 00:29:27.255 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=64 00:29:27.255 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 32 /dev/urandom 00:29:27.255 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # 
key=ca7cab8ccc524c16385c3a79acfa4750621f90f6456898aa088c3647f6f2b29d 00:29:27.255 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha512.XXX 00:29:27.255 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha512.mUg 00:29:27.255 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key ca7cab8ccc524c16385c3a79acfa4750621f90f6456898aa088c3647f6f2b29d 3 00:29:27.255 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 ca7cab8ccc524c16385c3a79acfa4750621f90f6456898aa088c3647f6f2b29d 3 00:29:27.255 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:29:27.255 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:29:27.255 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=ca7cab8ccc524c16385c3a79acfa4750621f90f6456898aa088c3647f6f2b29d 00:29:27.255 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=3 00:29:27.255 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:29:27.255 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha512.mUg 00:29:27.255 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha512.mUg 00:29:27.255 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@86 -- # ckeys[0]=/tmp/spdk.key-sha512.mUg 00:29:27.255 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@87 -- # gen_key null 48 00:29:27.255 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:29:27.255 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=null 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=48 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=e1b4a71c84dc209c56302cf00d158c3959a7f6d0168234bc 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-null.XXX 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-null.Q3B 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key e1b4a71c84dc209c56302cf00d158c3959a7f6d0168234bc 0 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 e1b4a71c84dc209c56302cf00d158c3959a7f6d0168234bc 0 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=e1b4a71c84dc209c56302cf00d158c3959a7f6d0168234bc 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=0 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-null.Q3B 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-null.Q3B 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@87 -- # keys[1]=/tmp/spdk.key-null.Q3B 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@87 -- # gen_key sha384 48 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' 
['sha384']='2' ['sha512']='3') 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha384 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=48 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=73f91255e5b1900f82efca55665f8606928b990f74473217 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha384.XXX 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha384.NVI 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 73f91255e5b1900f82efca55665f8606928b990f74473217 2 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 73f91255e5b1900f82efca55665f8606928b990f74473217 2 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=73f91255e5b1900f82efca55665f8606928b990f74473217 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=2 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha384.NVI 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha384.NVI 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@87 -- # ckeys[1]=/tmp/spdk.key-sha384.NVI 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@88 -- # gen_key sha256 32 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha256 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=32 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=ca952c51a5fd9277479469140993e5f0 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha256.XXX 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha256.4Se 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key ca952c51a5fd9277479469140993e5f0 1 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 ca952c51a5fd9277479469140993e5f0 1 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=ca952c51a5fd9277479469140993e5f0 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=1 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha256.4Se 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha256.4Se 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@88 -- # 
keys[2]=/tmp/spdk.key-sha256.4Se 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@88 -- # gen_key sha256 32 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha256 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=32 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=cc7bd8b63f97e2d6af05f8d09bf7a264 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha256.XXX 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha256.bYY 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key cc7bd8b63f97e2d6af05f8d09bf7a264 1 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 cc7bd8b63f97e2d6af05f8d09bf7a264 1 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=cc7bd8b63f97e2d6af05f8d09bf7a264 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=1 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha256.bYY 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha256.bYY 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@88 -- # ckeys[2]=/tmp/spdk.key-sha256.bYY 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@89 -- # gen_key sha384 48 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha384 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=48 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=aa949a445c07582e10b8801a0af1c1ed9edf60c0d5f09361 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha384.XXX 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha384.qov 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key aa949a445c07582e10b8801a0af1c1ed9edf60c0d5f09361 2 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 aa949a445c07582e10b8801a0af1c1ed9edf60c0d5f09361 2 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=aa949a445c07582e10b8801a0af1c1ed9edf60c0d5f09361 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=2 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth 
-- nvmf/common.sh@705 -- # python - 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha384.qov 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha384.qov 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@89 -- # keys[3]=/tmp/spdk.key-sha384.qov 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@89 -- # gen_key null 32 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=null 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=32 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=2350c368cae2bee781e4882143f5dc76 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-null.XXX 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-null.yl7 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 2350c368cae2bee781e4882143f5dc76 0 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 2350c368cae2bee781e4882143f5dc76 0 00:29:27.514 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:29:27.515 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:29:27.515 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=2350c368cae2bee781e4882143f5dc76 00:29:27.515 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=0 00:29:27.515 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:29:27.773 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-null.yl7 00:29:27.773 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-null.yl7 00:29:27.773 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@89 -- # ckeys[3]=/tmp/spdk.key-null.yl7 00:29:27.773 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@90 -- # gen_key sha512 64 00:29:27.773 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:29:27.773 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:27.773 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:29:27.773 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha512 00:29:27.773 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=64 00:29:27.773 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 32 /dev/urandom 00:29:27.773 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=ca0a579d8b0d4254db8819add3da6cd505aca107406d64419b8dc3694ab3a3b8 00:29:27.773 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha512.XXX 00:29:27.773 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha512.gQe 00:29:27.773 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key ca0a579d8b0d4254db8819add3da6cd505aca107406d64419b8dc3694ab3a3b8 3 00:29:27.773 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 ca0a579d8b0d4254db8819add3da6cd505aca107406d64419b8dc3694ab3a3b8 3 00:29:27.773 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix 
key digest 00:29:27.773 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:29:27.773 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=ca0a579d8b0d4254db8819add3da6cd505aca107406d64419b8dc3694ab3a3b8 00:29:27.773 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=3 00:29:27.773 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:29:27.773 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha512.gQe 00:29:27.773 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha512.gQe 00:29:27.773 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@90 -- # keys[4]=/tmp/spdk.key-sha512.gQe 00:29:27.773 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@90 -- # ckeys[4]= 00:29:27.773 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@92 -- # waitforlisten 467025 00:29:27.773 03:10:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@827 -- # '[' -z 467025 ']' 00:29:27.773 03:10:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:27.773 03:10:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:27.773 03:10:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:27.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:27.773 03:10:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:27.773 03:10:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@860 -- # return 0 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.rqN 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-sha512.mUg ]] 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.mUg 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Q3B 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-sha384.NVI ]] 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.NVI 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.4Se 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-sha256.bYY ]] 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.bYY 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.qov 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-null.yl7 ]] 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.yl7 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.gQe 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n '' ]] 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@98 -- # nvmet_auth_init 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@35 -- # get_main_ns_ip 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth 
-- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@639 -- # local block nvme 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@642 -- # modprobe nvmet 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:28.032 03:10:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:29.408 Waiting for block devices as requested 00:29:29.408 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:29:29.408 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:29.408 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:29.408 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:29.408 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:29.666 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:29.666 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:29.666 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:29.666 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:29.925 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:29.925 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:29.925 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:30.183 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:30.183 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:30.183 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:30.183 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:30.441 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:30.699 03:10:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:29:30.699 03:10:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:30.699 03:10:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:29:30.699 03:10:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:29:30.699 03:10:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:30.699 03:10:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:29:30.699 03:10:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:29:30.699 03:10:21 nvmf_tcp.nvmf_auth -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:29:30.699 03:10:21 nvmf_tcp.nvmf_auth -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:30.699 No valid GPT data, bailing 00:29:30.699 
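Note: before building the kernel nvmet subsystem, the helper walks /sys/block/nvme* and takes the first namespace that is neither zoned nor already in use; the "No valid GPT data, bailing" message above is spdk-gpt.py confirming /dev/nvme0n1 carries no partition table. A simplified sketch of that selection, using blkid only:

    nvme_dev=
    for block in /sys/block/nvme*; do
        [ -e "$block" ] || continue
        dev=/dev/${block##*/}
        # skip zoned namespaces; the nvmet backend here wants a regular block device
        [ "$(cat "$block/queue/zoned" 2>/dev/null)" = none ] || continue
        # an empty PTTYPE means no partition table, i.e. the device is not in use
        if [ -z "$(blkid -s PTTYPE -o value "$dev")" ]; then
            nvme_dev=$dev
            break
        fi
    done
    echo "kernel target will be backed by ${nvme_dev:-<none found>}"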
03:10:21 nvmf_tcp.nvmf_auth -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:30.958 03:10:21 nvmf_tcp.nvmf_auth -- scripts/common.sh@391 -- # pt= 00:29:30.958 03:10:21 nvmf_tcp.nvmf_auth -- scripts/common.sh@392 -- # return 1 00:29:30.958 03:10:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:29:30.958 03:10:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:29:30.958 03:10:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@667 -- # echo 1 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@669 -- # echo 1 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@672 -- # echo tcp 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@673 -- # echo 4420 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@674 -- # echo ipv4 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:29:30.959 00:29:30.959 Discovery Log Number of Records 2, Generation counter 2 00:29:30.959 =====Discovery Log Entry 0====== 00:29:30.959 trtype: tcp 00:29:30.959 adrfam: ipv4 00:29:30.959 subtype: current discovery subsystem 00:29:30.959 treq: not specified, sq flow control disable supported 00:29:30.959 portid: 1 00:29:30.959 trsvcid: 4420 00:29:30.959 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:30.959 traddr: 10.0.0.1 00:29:30.959 eflags: none 00:29:30.959 sectype: none 00:29:30.959 =====Discovery Log Entry 1====== 00:29:30.959 trtype: tcp 00:29:30.959 adrfam: ipv4 00:29:30.959 subtype: nvme subsystem 00:29:30.959 treq: not specified, sq flow control disable supported 00:29:30.959 portid: 1 00:29:30.959 trsvcid: 4420 00:29:30.959 subnqn: nqn.2024-02.io.spdk:cnode0 00:29:30.959 traddr: 10.0.0.1 00:29:30.959 eflags: none 00:29:30.959 sectype: none 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@37 -- # echo 0 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@101 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # 
dhgroup=ffdhe2048 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZTFiNGE3MWM4NGRjMjA5YzU2MzAyY2YwMGQxNThjMzk1OWE3ZjZkMDE2ODIzNGJjhAR5gA==: 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZTFiNGE3MWM4NGRjMjA5YzU2MzAyY2YwMGQxNThjMzk1OWE3ZjZkMDE2ODIzNGJjhAR5gA==: 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: ]] 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@106 -- # IFS=, 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@107 -- # printf %s sha256,sha384,sha512 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@106 -- # IFS=, 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@107 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@106 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256,sha384,sha512 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:30.959 03:10:21 
nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:30.959 nvme0n1 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@113 -- # for digest in "${digests[@]}" 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZDVlMGQxYWQ3MWY0YTdhODkyZjZiMTNjZDc2YTIxZTRZFjZ/: 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2E3Y2FiOGNjYzUyNGMxNjM4NWMzYTc5YWNmYTQ3NTA2MjFmOTBmNjQ1Njg5OGFhMDg4YzM2NDdmNmYyYjI5ZNO3kRs=: 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZDVlMGQxYWQ3MWY0YTdhODkyZjZiMTNjZDc2YTIxZTRZFjZ/: 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2E3Y2FiOGNjYzUyNGMxNjM4NWMzYTc5YWNmYTQ3NTA2MjFmOTBmNjQ1Njg5OGFhMDg4YzM2NDdmNmYyYjI5ZNO3kRs=: ]] 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:Y2E3Y2FiOGNjYzUyNGMxNjM4NWMzYTc5YWNmYTQ3NTA2MjFmOTBmNjQ1Njg5OGFhMDg4YzM2NDdmNmYyYjI5ZNO3kRs=: 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 0 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 
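Note: nvmet_auth_set_key above resolves keyid 1, then echoes 'hmac(sha256)', 'ffdhe2048' and the two DHHC-1 secrets into the kernel target's per-host auth attributes; the redirection targets do not appear in the xtrace. A hedged sketch of what those writes look like; the attribute names below (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) are an assumption about the kernel nvmet configfs layout, not something visible in this log:

    # assumed nvmet configfs attribute names -- verify against the running kernel
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"       # digest used for DH-HMAC-CHAP
    echo ffdhe2048      > "$host/dhchap_dhgroup"    # DH group
    echo "DHHC-1:00:ZTFiNGE3MWM4NGRjMjA5YzU2MzAyY2YwMGQxNThjMzk1OWE3ZjZkMDE2ODIzNGJjhAR5gA==:" > "$host/dhchap_key"
    echo "DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==:" > "$host/dhchap_ctrl_key"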
00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.959 03:10:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:31.219 nvme0n1 00:29:31.219 03:10:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.219 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:31.219 03:10:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.219 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:31.219 03:10:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:31.219 03:10:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.219 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:31.219 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:31.219 03:10:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.219 03:10:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:31.219 03:10:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.219 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:31.219 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:31.219 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:31.219 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:29:31.219 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:31.219 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:29:31.219 03:10:21 
nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZTFiNGE3MWM4NGRjMjA5YzU2MzAyY2YwMGQxNThjMzk1OWE3ZjZkMDE2ODIzNGJjhAR5gA==: 00:29:31.219 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: 00:29:31.219 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:31.219 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:29:31.219 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZTFiNGE3MWM4NGRjMjA5YzU2MzAyY2YwMGQxNThjMzk1OWE3ZjZkMDE2ODIzNGJjhAR5gA==: 00:29:31.219 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: ]] 00:29:31.219 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: 00:29:31.219 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 1 00:29:31.219 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:31.219 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:29:31.219 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:29:31.219 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:29:31.219 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:31.219 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:31.219 03:10:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.219 03:10:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:31.219 03:10:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.219 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:31.219 03:10:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:31.219 03:10:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:31.219 03:10:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:31.219 03:10:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:31.219 03:10:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:31.219 03:10:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:31.219 03:10:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:31.219 03:10:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:31.219 03:10:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:31.219 03:10:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:31.219 03:10:21 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:31.219 03:10:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.219 03:10:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:31.478 nvme0n1 00:29:31.478 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.478 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:31.478 03:10:22 
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:31.478 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.478 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:31.478 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.478 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:31.478 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:31.478 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.478 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:31.478 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.478 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:31.478 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:31.478 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:31.478 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:29:31.478 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:31.478 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:29:31.478 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:Y2E5NTJjNTFhNWZkOTI3NzQ3OTQ2OTE0MDk5M2U1ZjA/o/es: 00:29:31.478 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2M3YmQ4YjYzZjk3ZTJkNmFmMDVmOGQwOWJmN2EyNjQEPAL6: 00:29:31.478 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:31.478 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:29:31.478 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:Y2E5NTJjNTFhNWZkOTI3NzQ3OTQ2OTE0MDk5M2U1ZjA/o/es: 00:29:31.478 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2M3YmQ4YjYzZjk3ZTJkNmFmMDVmOGQwOWJmN2EyNjQEPAL6: ]] 00:29:31.478 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:Y2M3YmQ4YjYzZjk3ZTJkNmFmMDVmOGQwOWJmN2EyNjQEPAL6: 00:29:31.478 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 2 00:29:31.478 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:31.478 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:29:31.478 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:29:31.478 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:29:31.478 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:31.478 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:31.478 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.478 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:31.478 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.478 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:31.478 03:10:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:31.478 03:10:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:31.478 03:10:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:31.478 03:10:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:31.478 03:10:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:31.478 03:10:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:31.478 03:10:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:31.478 03:10:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:31.478 03:10:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:31.478 03:10:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:31.478 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:31.478 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.478 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:31.478 nvme0n1 00:29:31.479 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.479 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:31.479 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.479 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:31.479 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:31.479 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:YWE5NDlhNDQ1YzA3NTgyZTEwYjg4MDFhMGFmMWMxZWQ5ZWRmNjBjMGQ1ZjA5MzYxn4RUqQ==: 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjM1MGMzNjhjYWUyYmVlNzgxZTQ4ODIxNDNmNWRjNzY7eDnC: 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:YWE5NDlhNDQ1YzA3NTgyZTEwYjg4MDFhMGFmMWMxZWQ5ZWRmNjBjMGQ1ZjA5MzYxn4RUqQ==: 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjM1MGMzNjhjYWUyYmVlNzgxZTQ4ODIxNDNmNWRjNzY7eDnC: ]] 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:MjM1MGMzNjhjYWUyYmVlNzgxZTQ4ODIxNDNmNWRjNzY7eDnC: 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 3 
00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:31.738 nvme0n1 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth 
-- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:Y2EwYTU3OWQ4YjBkNDI1NGRiODgxOWFkZDNkYTZjZDUwNWFjYTEwNzQwNmQ2NDQxOWI4ZGMzNjk0YWIzYTNiOESKwX4=: 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:Y2EwYTU3OWQ4YjBkNDI1NGRiODgxOWFkZDNkYTZjZDUwNWFjYTEwNzQwNmQ2NDQxOWI4ZGMzNjk0YWIzYTNiOESKwX4=: 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 4 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.738 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:31.997 nvme0n1 00:29:31.997 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.997 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:31.997 03:10:22 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.997 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:31.997 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:31.997 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.997 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:31.997 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:31.997 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.997 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:31.997 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.997 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:29:31.997 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:31.997 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:29:31.997 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:31.997 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:29:31.997 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:31.997 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:29:31.997 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZDVlMGQxYWQ3MWY0YTdhODkyZjZiMTNjZDc2YTIxZTRZFjZ/: 00:29:31.997 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2E3Y2FiOGNjYzUyNGMxNjM4NWMzYTc5YWNmYTQ3NTA2MjFmOTBmNjQ1Njg5OGFhMDg4YzM2NDdmNmYyYjI5ZNO3kRs=: 00:29:31.997 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:31.997 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:29:31.997 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZDVlMGQxYWQ3MWY0YTdhODkyZjZiMTNjZDc2YTIxZTRZFjZ/: 00:29:31.997 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2E3Y2FiOGNjYzUyNGMxNjM4NWMzYTc5YWNmYTQ3NTA2MjFmOTBmNjQ1Njg5OGFhMDg4YzM2NDdmNmYyYjI5ZNO3kRs=: ]] 00:29:31.997 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:Y2E3Y2FiOGNjYzUyNGMxNjM4NWMzYTc5YWNmYTQ3NTA2MjFmOTBmNjQ1Njg5OGFhMDg4YzM2NDdmNmYyYjI5ZNO3kRs=: 00:29:31.997 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 0 00:29:31.997 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:31.997 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:29:31.997 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:29:31.997 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:29:31.997 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:31.997 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:31.997 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.997 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:31.997 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.997 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:31.997 03:10:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:31.997 03:10:22 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:31.997 03:10:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:31.997 03:10:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:31.997 03:10:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:31.997 03:10:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:31.997 03:10:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:31.997 03:10:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:31.997 03:10:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:31.997 03:10:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:31.997 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:31.997 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.997 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:32.255 nvme0n1 00:29:32.255 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.255 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:32.255 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.255 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:32.255 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:32.255 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.255 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:32.255 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:32.255 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.255 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:32.255 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.255 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:32.255 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:29:32.255 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:32.255 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:29:32.255 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:32.255 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:29:32.255 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZTFiNGE3MWM4NGRjMjA5YzU2MzAyY2YwMGQxNThjMzk1OWE3ZjZkMDE2ODIzNGJjhAR5gA==: 00:29:32.255 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: 00:29:32.255 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:32.255 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:29:32.255 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZTFiNGE3MWM4NGRjMjA5YzU2MzAyY2YwMGQxNThjMzk1OWE3ZjZkMDE2ODIzNGJjhAR5gA==: 00:29:32.255 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: ]] 00:29:32.255 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: 00:29:32.255 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 1 00:29:32.255 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:32.255 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:29:32.255 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:29:32.255 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:29:32.255 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:32.255 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:32.255 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.255 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:32.255 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.255 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:32.255 03:10:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:32.255 03:10:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:32.255 03:10:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:32.255 03:10:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:32.255 03:10:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:32.255 03:10:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:32.255 03:10:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:32.255 03:10:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:32.255 03:10:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:32.255 03:10:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:32.255 03:10:22 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:32.255 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.256 03:10:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:32.514 nvme0n1 00:29:32.514 03:10:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.514 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:32.514 03:10:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.514 03:10:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:32.514 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:32.514 03:10:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.514 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:32.514 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:32.514 03:10:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.514 03:10:23 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:29:32.514 03:10:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.514 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:32.514 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:29:32.514 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:32.514 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:29:32.514 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:32.514 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:29:32.514 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:Y2E5NTJjNTFhNWZkOTI3NzQ3OTQ2OTE0MDk5M2U1ZjA/o/es: 00:29:32.514 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2M3YmQ4YjYzZjk3ZTJkNmFmMDVmOGQwOWJmN2EyNjQEPAL6: 00:29:32.514 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:32.514 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:29:32.514 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:Y2E5NTJjNTFhNWZkOTI3NzQ3OTQ2OTE0MDk5M2U1ZjA/o/es: 00:29:32.514 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2M3YmQ4YjYzZjk3ZTJkNmFmMDVmOGQwOWJmN2EyNjQEPAL6: ]] 00:29:32.514 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:Y2M3YmQ4YjYzZjk3ZTJkNmFmMDVmOGQwOWJmN2EyNjQEPAL6: 00:29:32.514 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 2 00:29:32.514 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:32.514 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:29:32.514 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:29:32.514 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:29:32.514 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:32.514 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:32.514 03:10:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.514 03:10:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:32.514 03:10:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.515 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:32.515 03:10:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:32.515 03:10:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:32.515 03:10:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:32.515 03:10:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:32.515 03:10:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:32.515 03:10:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:32.515 03:10:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:32.515 03:10:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:32.515 03:10:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:32.515 03:10:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:32.515 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:32.515 03:10:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.515 03:10:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:32.773 nvme0n1 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:YWE5NDlhNDQ1YzA3NTgyZTEwYjg4MDFhMGFmMWMxZWQ5ZWRmNjBjMGQ1ZjA5MzYxn4RUqQ==: 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjM1MGMzNjhjYWUyYmVlNzgxZTQ4ODIxNDNmNWRjNzY7eDnC: 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:YWE5NDlhNDQ1YzA3NTgyZTEwYjg4MDFhMGFmMWMxZWQ5ZWRmNjBjMGQ1ZjA5MzYxn4RUqQ==: 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjM1MGMzNjhjYWUyYmVlNzgxZTQ4ODIxNDNmNWRjNzY7eDnC: ]] 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:MjM1MGMzNjhjYWUyYmVlNzgxZTQ4ODIxNDNmNWRjNzY7eDnC: 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 3 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:32.773 nvme0n1 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:32.773 03:10:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.032 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:33.032 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:33.032 03:10:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.032 03:10:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:33.032 03:10:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.032 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:33.032 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:29:33.032 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:33.032 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:29:33.032 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:33.032 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:29:33.032 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:Y2EwYTU3OWQ4YjBkNDI1NGRiODgxOWFkZDNkYTZjZDUwNWFjYTEwNzQwNmQ2NDQxOWI4ZGMzNjk0YWIzYTNiOESKwX4=: 00:29:33.032 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:29:33.032 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:33.032 03:10:23 nvmf_tcp.nvmf_auth -- 
host/auth.sh@49 -- # echo ffdhe3072 00:29:33.032 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:Y2EwYTU3OWQ4YjBkNDI1NGRiODgxOWFkZDNkYTZjZDUwNWFjYTEwNzQwNmQ2NDQxOWI4ZGMzNjk0YWIzYTNiOESKwX4=: 00:29:33.032 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:33.032 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 4 00:29:33.032 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:33.032 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:29:33.032 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:29:33.032 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:29:33.032 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:33.032 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:33.032 03:10:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.032 03:10:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:33.032 03:10:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.032 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:33.032 03:10:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:33.032 03:10:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:33.032 03:10:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:33.032 03:10:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:33.032 03:10:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:33.032 03:10:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:33.032 03:10:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:33.032 03:10:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:33.032 03:10:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:33.032 03:10:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:33.032 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:33.032 03:10:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.032 03:10:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:33.032 nvme0n1 00:29:33.032 03:10:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.032 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:33.032 03:10:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.032 03:10:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:33.033 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:33.033 03:10:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.033 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:33.033 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:33.033 03:10:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.033 03:10:23 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:29:33.033 03:10:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.033 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:29:33.033 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:33.033 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:29:33.033 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:33.033 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:29:33.033 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:33.033 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:29:33.033 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZDVlMGQxYWQ3MWY0YTdhODkyZjZiMTNjZDc2YTIxZTRZFjZ/: 00:29:33.033 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2E3Y2FiOGNjYzUyNGMxNjM4NWMzYTc5YWNmYTQ3NTA2MjFmOTBmNjQ1Njg5OGFhMDg4YzM2NDdmNmYyYjI5ZNO3kRs=: 00:29:33.033 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:33.033 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:29:33.033 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZDVlMGQxYWQ3MWY0YTdhODkyZjZiMTNjZDc2YTIxZTRZFjZ/: 00:29:33.033 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2E3Y2FiOGNjYzUyNGMxNjM4NWMzYTc5YWNmYTQ3NTA2MjFmOTBmNjQ1Njg5OGFhMDg4YzM2NDdmNmYyYjI5ZNO3kRs=: ]] 00:29:33.033 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:Y2E3Y2FiOGNjYzUyNGMxNjM4NWMzYTc5YWNmYTQ3NTA2MjFmOTBmNjQ1Njg5OGFhMDg4YzM2NDdmNmYyYjI5ZNO3kRs=: 00:29:33.033 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 0 00:29:33.033 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:33.033 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:29:33.033 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:29:33.033 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:29:33.033 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:33.033 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:33.033 03:10:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.033 03:10:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:33.033 03:10:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.033 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:33.033 03:10:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:33.033 03:10:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:33.033 03:10:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:33.033 03:10:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:33.033 03:10:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:33.033 03:10:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:33.033 03:10:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:33.033 03:10:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:33.033 03:10:23 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:33.033 03:10:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:33.033 03:10:23 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:33.033 03:10:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.033 03:10:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:33.290 nvme0n1 00:29:33.290 03:10:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.290 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:33.290 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:33.290 03:10:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.290 03:10:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:33.290 03:10:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.548 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:33.548 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:33.548 03:10:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.548 03:10:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:33.548 03:10:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.548 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:33.548 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:29:33.548 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:33.548 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:29:33.548 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:33.548 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:29:33.548 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZTFiNGE3MWM4NGRjMjA5YzU2MzAyY2YwMGQxNThjMzk1OWE3ZjZkMDE2ODIzNGJjhAR5gA==: 00:29:33.548 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: 00:29:33.548 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:33.548 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:29:33.548 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZTFiNGE3MWM4NGRjMjA5YzU2MzAyY2YwMGQxNThjMzk1OWE3ZjZkMDE2ODIzNGJjhAR5gA==: 00:29:33.548 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: ]] 00:29:33.548 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: 00:29:33.548 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 1 00:29:33.548 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:33.548 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:29:33.548 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:29:33.548 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:29:33.548 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@71 
-- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:33.548 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:33.548 03:10:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.548 03:10:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:33.548 03:10:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.548 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:33.548 03:10:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:33.548 03:10:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:33.548 03:10:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:33.548 03:10:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:33.548 03:10:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:33.548 03:10:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:33.548 03:10:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:33.548 03:10:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:33.548 03:10:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:33.548 03:10:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:33.548 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:33.548 03:10:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.548 03:10:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:33.805 nvme0n1 00:29:33.805 03:10:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.805 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:33.805 03:10:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.805 03:10:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:33.805 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:33.805 03:10:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.805 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:33.805 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:33.805 03:10:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.805 03:10:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:33.805 03:10:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.805 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:33.805 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:29:33.805 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:33.805 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:29:33.805 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:33.805 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:29:33.805 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # 
key=DHHC-1:01:Y2E5NTJjNTFhNWZkOTI3NzQ3OTQ2OTE0MDk5M2U1ZjA/o/es: 00:29:33.805 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2M3YmQ4YjYzZjk3ZTJkNmFmMDVmOGQwOWJmN2EyNjQEPAL6: 00:29:33.805 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:33.805 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:29:33.805 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:Y2E5NTJjNTFhNWZkOTI3NzQ3OTQ2OTE0MDk5M2U1ZjA/o/es: 00:29:33.805 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2M3YmQ4YjYzZjk3ZTJkNmFmMDVmOGQwOWJmN2EyNjQEPAL6: ]] 00:29:33.805 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:Y2M3YmQ4YjYzZjk3ZTJkNmFmMDVmOGQwOWJmN2EyNjQEPAL6: 00:29:33.805 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 2 00:29:33.805 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:33.805 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:29:33.805 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:29:33.805 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:29:33.805 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:33.805 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:33.805 03:10:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.805 03:10:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:33.805 03:10:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.805 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:33.805 03:10:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:33.805 03:10:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:33.805 03:10:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:33.805 03:10:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:33.805 03:10:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:33.805 03:10:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:33.805 03:10:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:33.805 03:10:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:33.805 03:10:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:33.805 03:10:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:33.805 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:33.805 03:10:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.805 03:10:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:34.062 nvme0n1 00:29:34.062 03:10:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.062 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:34.062 03:10:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.062 03:10:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:34.062 03:10:24 
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:34.062 03:10:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.062 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:34.062 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:34.062 03:10:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.062 03:10:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:34.062 03:10:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.062 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:34.062 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:29:34.062 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:34.062 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:29:34.062 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:34.062 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:29:34.062 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:YWE5NDlhNDQ1YzA3NTgyZTEwYjg4MDFhMGFmMWMxZWQ5ZWRmNjBjMGQ1ZjA5MzYxn4RUqQ==: 00:29:34.062 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjM1MGMzNjhjYWUyYmVlNzgxZTQ4ODIxNDNmNWRjNzY7eDnC: 00:29:34.062 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:34.062 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:29:34.063 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:YWE5NDlhNDQ1YzA3NTgyZTEwYjg4MDFhMGFmMWMxZWQ5ZWRmNjBjMGQ1ZjA5MzYxn4RUqQ==: 00:29:34.063 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjM1MGMzNjhjYWUyYmVlNzgxZTQ4ODIxNDNmNWRjNzY7eDnC: ]] 00:29:34.063 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:MjM1MGMzNjhjYWUyYmVlNzgxZTQ4ODIxNDNmNWRjNzY7eDnC: 00:29:34.063 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 3 00:29:34.063 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:34.063 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:29:34.063 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:29:34.063 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:29:34.063 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:34.063 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:34.063 03:10:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.063 03:10:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:34.063 03:10:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.063 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:34.063 03:10:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:34.063 03:10:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:34.063 03:10:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:34.063 03:10:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:34.063 03:10:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
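One detail in the surrounding entries: the assignment traced at host/auth.sh@71, ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}), leaves the array empty whenever no controller key is configured for that keyid (keyid 4 in the entries that follow has an empty ckey), so --dhchap-ctrlr-key is only passed when bidirectional authentication is being exercised. A minimal standalone illustration of that bash expansion, with placeholder values rather than keys from this run:

# Placeholder values only; keyid 2 has a controller key, keyid 4 does not
declare -a ckeys=([2]="DHHC-1:01:placeholder" [4]="")
for keyid in 2 4; do
    # Expands to two extra arguments when ckeys[keyid] is non-empty, to nothing otherwise
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid extra args: ${ckey[*]:-<none>}"
done
# keyid=2 extra args: --dhchap-ctrlr-key ckey2
# keyid=4 extra args: <none>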
00:29:34.063 03:10:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:34.063 03:10:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:34.063 03:10:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:34.063 03:10:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:34.063 03:10:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:34.063 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:34.063 03:10:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.063 03:10:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:34.320 nvme0n1 00:29:34.320 03:10:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.320 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:34.320 03:10:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.320 03:10:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:34.320 03:10:24 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:34.320 03:10:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.320 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:34.320 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:34.320 03:10:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.320 03:10:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:34.321 03:10:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.321 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:34.321 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:29:34.321 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:34.321 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:29:34.321 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:34.321 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:29:34.321 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:Y2EwYTU3OWQ4YjBkNDI1NGRiODgxOWFkZDNkYTZjZDUwNWFjYTEwNzQwNmQ2NDQxOWI4ZGMzNjk0YWIzYTNiOESKwX4=: 00:29:34.321 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:29:34.321 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:34.321 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:29:34.321 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:Y2EwYTU3OWQ4YjBkNDI1NGRiODgxOWFkZDNkYTZjZDUwNWFjYTEwNzQwNmQ2NDQxOWI4ZGMzNjk0YWIzYTNiOESKwX4=: 00:29:34.321 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:34.321 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 4 00:29:34.321 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:34.321 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:29:34.321 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:29:34.321 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:29:34.321 03:10:25 
nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:34.321 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:34.321 03:10:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.321 03:10:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:34.321 03:10:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.321 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:34.321 03:10:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:34.321 03:10:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:34.321 03:10:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:34.321 03:10:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:34.321 03:10:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:34.321 03:10:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:34.321 03:10:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:34.321 03:10:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:34.321 03:10:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:34.321 03:10:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:34.321 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:34.321 03:10:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.321 03:10:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:34.578 nvme0n1 00:29:34.578 03:10:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.578 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:34.578 03:10:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.578 03:10:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:34.578 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:34.578 03:10:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.578 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:34.578 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:34.578 03:10:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.578 03:10:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:34.578 03:10:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.578 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:29:34.578 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:34.578 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:29:34.578 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:34.578 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:29:34.578 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:34.578 03:10:25 nvmf_tcp.nvmf_auth -- 
host/auth.sh@44 -- # keyid=0 00:29:34.579 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZDVlMGQxYWQ3MWY0YTdhODkyZjZiMTNjZDc2YTIxZTRZFjZ/: 00:29:34.579 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2E3Y2FiOGNjYzUyNGMxNjM4NWMzYTc5YWNmYTQ3NTA2MjFmOTBmNjQ1Njg5OGFhMDg4YzM2NDdmNmYyYjI5ZNO3kRs=: 00:29:34.579 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:34.579 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:29:34.579 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZDVlMGQxYWQ3MWY0YTdhODkyZjZiMTNjZDc2YTIxZTRZFjZ/: 00:29:34.579 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2E3Y2FiOGNjYzUyNGMxNjM4NWMzYTc5YWNmYTQ3NTA2MjFmOTBmNjQ1Njg5OGFhMDg4YzM2NDdmNmYyYjI5ZNO3kRs=: ]] 00:29:34.579 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:Y2E3Y2FiOGNjYzUyNGMxNjM4NWMzYTc5YWNmYTQ3NTA2MjFmOTBmNjQ1Njg5OGFhMDg4YzM2NDdmNmYyYjI5ZNO3kRs=: 00:29:34.579 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 0 00:29:34.579 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:34.579 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:29:34.579 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:29:34.579 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:29:34.579 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:34.579 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:34.579 03:10:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.579 03:10:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:34.579 03:10:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.579 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:34.579 03:10:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:34.579 03:10:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:34.579 03:10:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:34.579 03:10:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:34.579 03:10:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:34.579 03:10:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:34.579 03:10:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:34.579 03:10:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:34.579 03:10:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:34.579 03:10:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:34.579 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:34.579 03:10:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.579 03:10:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:35.145 nvme0n1 00:29:35.145 03:10:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.145 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd 
bdev_nvme_get_controllers 00:29:35.145 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:35.145 03:10:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.145 03:10:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:35.145 03:10:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.145 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:35.145 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:35.145 03:10:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.145 03:10:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:35.145 03:10:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.145 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:35.145 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:29:35.145 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:35.145 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:29:35.145 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:35.145 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:29:35.145 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZTFiNGE3MWM4NGRjMjA5YzU2MzAyY2YwMGQxNThjMzk1OWE3ZjZkMDE2ODIzNGJjhAR5gA==: 00:29:35.145 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: 00:29:35.145 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:35.145 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:29:35.145 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZTFiNGE3MWM4NGRjMjA5YzU2MzAyY2YwMGQxNThjMzk1OWE3ZjZkMDE2ODIzNGJjhAR5gA==: 00:29:35.145 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: ]] 00:29:35.145 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: 00:29:35.145 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 1 00:29:35.145 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:35.145 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:29:35.145 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:29:35.145 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:29:35.145 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:35.145 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:35.145 03:10:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.145 03:10:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:35.145 03:10:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.145 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:35.145 03:10:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:35.145 03:10:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:35.145 
03:10:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:35.145 03:10:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:35.145 03:10:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:35.145 03:10:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:35.145 03:10:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:35.145 03:10:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:35.145 03:10:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:35.145 03:10:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:35.145 03:10:25 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:35.145 03:10:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.145 03:10:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:35.724 nvme0n1 00:29:35.724 03:10:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.724 03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:35.724 03:10:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.724 03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:35.724 03:10:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:35.724 03:10:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.724 03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:35.724 03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:35.724 03:10:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.724 03:10:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:35.724 03:10:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.724 03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:35.724 03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:29:35.724 03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:35.724 03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:29:35.724 03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:35.724 03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:29:35.724 03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:Y2E5NTJjNTFhNWZkOTI3NzQ3OTQ2OTE0MDk5M2U1ZjA/o/es: 00:29:35.724 03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2M3YmQ4YjYzZjk3ZTJkNmFmMDVmOGQwOWJmN2EyNjQEPAL6: 00:29:35.724 03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:35.724 03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:29:35.724 03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:Y2E5NTJjNTFhNWZkOTI3NzQ3OTQ2OTE0MDk5M2U1ZjA/o/es: 00:29:35.724 03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2M3YmQ4YjYzZjk3ZTJkNmFmMDVmOGQwOWJmN2EyNjQEPAL6: ]] 00:29:35.724 03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:Y2M3YmQ4YjYzZjk3ZTJkNmFmMDVmOGQwOWJmN2EyNjQEPAL6: 00:29:35.724 
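Each connect_authenticate call in this trace boils down to the same two host-side RPCs: restrict the allowed DH-HMAC-CHAP digest/dhgroup, then attach the controller with the key pair for the current keyid. A minimal sketch of that pair, assuming rpc_cmd is the harness wrapper around SPDK's rpc script as seen in this log and that key2/ckey2 were registered earlier in the test (the flags themselves are copied from the attach commands below; only the shell variables are added for readability):

  digest=sha256 dhgroup=ffdhe6144 keyid=2
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"
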
03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 2 00:29:35.724 03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:35.724 03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:29:35.724 03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:29:35.724 03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:29:35.724 03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:35.724 03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:35.724 03:10:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.724 03:10:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:35.724 03:10:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.724 03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:35.724 03:10:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:35.724 03:10:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:35.724 03:10:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:35.724 03:10:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:35.724 03:10:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:35.724 03:10:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:35.724 03:10:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:35.724 03:10:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:35.724 03:10:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:35.724 03:10:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:35.724 03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:35.724 03:10:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.724 03:10:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:36.341 nvme0n1 00:29:36.341 03:10:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.341 03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:36.341 03:10:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.341 03:10:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:36.341 03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:36.341 03:10:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.341 03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:36.341 03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:36.341 03:10:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.341 03:10:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:36.341 03:10:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.341 03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:36.341 03:10:26 nvmf_tcp.nvmf_auth -- 
host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:29:36.341 03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:36.341 03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:29:36.341 03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:36.341 03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:29:36.341 03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:YWE5NDlhNDQ1YzA3NTgyZTEwYjg4MDFhMGFmMWMxZWQ5ZWRmNjBjMGQ1ZjA5MzYxn4RUqQ==: 00:29:36.341 03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjM1MGMzNjhjYWUyYmVlNzgxZTQ4ODIxNDNmNWRjNzY7eDnC: 00:29:36.341 03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:36.341 03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:29:36.341 03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:YWE5NDlhNDQ1YzA3NTgyZTEwYjg4MDFhMGFmMWMxZWQ5ZWRmNjBjMGQ1ZjA5MzYxn4RUqQ==: 00:29:36.341 03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjM1MGMzNjhjYWUyYmVlNzgxZTQ4ODIxNDNmNWRjNzY7eDnC: ]] 00:29:36.341 03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:MjM1MGMzNjhjYWUyYmVlNzgxZTQ4ODIxNDNmNWRjNzY7eDnC: 00:29:36.341 03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 3 00:29:36.341 03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:36.341 03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:29:36.341 03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:29:36.341 03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:29:36.341 03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:36.341 03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:36.341 03:10:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.341 03:10:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:36.341 03:10:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.341 03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:36.341 03:10:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:36.341 03:10:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:36.341 03:10:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:36.341 03:10:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:36.341 03:10:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:36.341 03:10:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:36.341 03:10:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:36.341 03:10:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:36.341 03:10:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:36.341 03:10:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:36.341 03:10:26 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:36.341 03:10:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:29:36.341 03:10:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:36.911 nvme0n1 00:29:36.911 03:10:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.911 03:10:27 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:36.911 03:10:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.911 03:10:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:36.911 03:10:27 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:36.911 03:10:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.911 03:10:27 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:36.911 03:10:27 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:36.911 03:10:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.911 03:10:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:36.911 03:10:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.911 03:10:27 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:36.911 03:10:27 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:29:36.911 03:10:27 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:36.911 03:10:27 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:29:36.911 03:10:27 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:36.911 03:10:27 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:29:36.911 03:10:27 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:Y2EwYTU3OWQ4YjBkNDI1NGRiODgxOWFkZDNkYTZjZDUwNWFjYTEwNzQwNmQ2NDQxOWI4ZGMzNjk0YWIzYTNiOESKwX4=: 00:29:36.911 03:10:27 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:29:36.912 03:10:27 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:36.912 03:10:27 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:29:36.912 03:10:27 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:Y2EwYTU3OWQ4YjBkNDI1NGRiODgxOWFkZDNkYTZjZDUwNWFjYTEwNzQwNmQ2NDQxOWI4ZGMzNjk0YWIzYTNiOESKwX4=: 00:29:36.912 03:10:27 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:36.912 03:10:27 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 4 00:29:36.912 03:10:27 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:36.912 03:10:27 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:29:36.912 03:10:27 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:29:36.912 03:10:27 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:29:36.912 03:10:27 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:36.912 03:10:27 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:36.912 03:10:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.912 03:10:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:36.912 03:10:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.912 03:10:27 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:36.912 03:10:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:36.912 03:10:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 
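The nvmf/common.sh entries around here are get_main_ns_ip resolving which address the host should dial. Reconstructed roughly from the traced lines (@728-@742); TEST_TRANSPORT and the defaulted values below are assumptions added so the sketch can run on its own:

  TEST_TRANSPORT=${TEST_TRANSPORT:-tcp}
  NVMF_INITIATOR_IP=${NVMF_INITIATOR_IP:-10.0.0.1}   # value observed in this log
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      # bail out if the transport or its candidate variable name is unset
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z ${!ip} ]] && return 1   # indirect expansion of the chosen variable
      echo "${!ip}"
  }

For tcp the indirection lands on NVMF_INITIATOR_IP, which is why every attach in this trace targets 10.0.0.1.
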
00:29:36.912 03:10:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:36.912 03:10:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:36.912 03:10:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:36.912 03:10:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:36.912 03:10:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:36.912 03:10:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:36.912 03:10:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:36.912 03:10:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:36.912 03:10:27 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:36.912 03:10:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.912 03:10:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:37.482 nvme0n1 00:29:37.482 03:10:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.482 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:37.482 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:37.482 03:10:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.482 03:10:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:37.482 03:10:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.482 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:37.482 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:37.482 03:10:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.482 03:10:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:37.482 03:10:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.482 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:29:37.482 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:37.482 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:29:37.482 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:37.482 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:29:37.482 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:37.482 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:29:37.482 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZDVlMGQxYWQ3MWY0YTdhODkyZjZiMTNjZDc2YTIxZTRZFjZ/: 00:29:37.482 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2E3Y2FiOGNjYzUyNGMxNjM4NWMzYTc5YWNmYTQ3NTA2MjFmOTBmNjQ1Njg5OGFhMDg4YzM2NDdmNmYyYjI5ZNO3kRs=: 00:29:37.482 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:37.482 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:29:37.482 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZDVlMGQxYWQ3MWY0YTdhODkyZjZiMTNjZDc2YTIxZTRZFjZ/: 00:29:37.482 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:Y2E3Y2FiOGNjYzUyNGMxNjM4NWMzYTc5YWNmYTQ3NTA2MjFmOTBmNjQ1Njg5OGFhMDg4YzM2NDdmNmYyYjI5ZNO3kRs=: ]] 00:29:37.482 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:Y2E3Y2FiOGNjYzUyNGMxNjM4NWMzYTc5YWNmYTQ3NTA2MjFmOTBmNjQ1Njg5OGFhMDg4YzM2NDdmNmYyYjI5ZNO3kRs=: 00:29:37.482 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 0 00:29:37.482 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:37.482 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:29:37.482 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:29:37.482 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:29:37.482 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:37.482 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:37.482 03:10:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.482 03:10:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:37.482 03:10:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.482 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:37.482 03:10:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:37.482 03:10:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:37.482 03:10:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:37.482 03:10:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:37.482 03:10:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:37.482 03:10:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:37.482 03:10:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:37.482 03:10:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:37.482 03:10:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:37.482 03:10:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:37.482 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:37.482 03:10:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.482 03:10:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:38.422 nvme0n1 00:29:38.422 03:10:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.422 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:38.422 03:10:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.422 03:10:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:38.422 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:38.422 03:10:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.422 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:38.422 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:38.422 03:10:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.422 03:10:28 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:38.422 03:10:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.422 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:38.422 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:29:38.422 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:38.422 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:29:38.422 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:38.422 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:29:38.422 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZTFiNGE3MWM4NGRjMjA5YzU2MzAyY2YwMGQxNThjMzk1OWE3ZjZkMDE2ODIzNGJjhAR5gA==: 00:29:38.422 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: 00:29:38.422 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:38.422 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:29:38.422 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZTFiNGE3MWM4NGRjMjA5YzU2MzAyY2YwMGQxNThjMzk1OWE3ZjZkMDE2ODIzNGJjhAR5gA==: 00:29:38.422 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: ]] 00:29:38.422 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: 00:29:38.422 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 1 00:29:38.422 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:38.422 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:29:38.422 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:29:38.422 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:29:38.422 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:38.422 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:38.422 03:10:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.422 03:10:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:38.422 03:10:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.422 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:38.422 03:10:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:38.422 03:10:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:38.422 03:10:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:38.422 03:10:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:38.422 03:10:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:38.422 03:10:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:38.422 03:10:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:38.422 03:10:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:38.422 03:10:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:38.422 03:10:28 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:38.422 03:10:28 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:38.422 03:10:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.422 03:10:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:39.359 nvme0n1 00:29:39.359 03:10:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.359 03:10:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:39.359 03:10:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.359 03:10:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:39.359 03:10:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:39.359 03:10:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.359 03:10:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:39.359 03:10:29 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:39.359 03:10:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.359 03:10:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:39.359 03:10:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.359 03:10:29 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:39.359 03:10:29 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:29:39.359 03:10:29 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:39.359 03:10:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:29:39.359 03:10:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:39.359 03:10:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:29:39.359 03:10:29 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:Y2E5NTJjNTFhNWZkOTI3NzQ3OTQ2OTE0MDk5M2U1ZjA/o/es: 00:29:39.359 03:10:29 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2M3YmQ4YjYzZjk3ZTJkNmFmMDVmOGQwOWJmN2EyNjQEPAL6: 00:29:39.359 03:10:29 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:39.359 03:10:29 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:29:39.359 03:10:29 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:Y2E5NTJjNTFhNWZkOTI3NzQ3OTQ2OTE0MDk5M2U1ZjA/o/es: 00:29:39.359 03:10:29 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2M3YmQ4YjYzZjk3ZTJkNmFmMDVmOGQwOWJmN2EyNjQEPAL6: ]] 00:29:39.360 03:10:29 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:Y2M3YmQ4YjYzZjk3ZTJkNmFmMDVmOGQwOWJmN2EyNjQEPAL6: 00:29:39.360 03:10:29 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 2 00:29:39.360 03:10:29 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:39.360 03:10:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:29:39.360 03:10:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:29:39.360 03:10:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:29:39.360 03:10:29 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:39.360 03:10:29 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups 
ffdhe8192 00:29:39.360 03:10:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.360 03:10:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:39.360 03:10:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.360 03:10:29 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:39.360 03:10:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:39.360 03:10:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:39.360 03:10:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:39.360 03:10:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:39.360 03:10:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:39.360 03:10:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:39.360 03:10:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:39.360 03:10:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:39.360 03:10:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:39.360 03:10:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:39.360 03:10:29 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:39.360 03:10:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.360 03:10:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:40.296 nvme0n1 00:29:40.296 03:10:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:40.296 03:10:30 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:40.296 03:10:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:40.296 03:10:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:40.296 03:10:30 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:40.296 03:10:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:40.296 03:10:30 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:40.296 03:10:30 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:40.296 03:10:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:40.296 03:10:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:40.296 03:10:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:40.296 03:10:31 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:40.296 03:10:31 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:29:40.296 03:10:31 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:40.296 03:10:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:29:40.296 03:10:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:40.296 03:10:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:29:40.296 03:10:31 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:YWE5NDlhNDQ1YzA3NTgyZTEwYjg4MDFhMGFmMWMxZWQ5ZWRmNjBjMGQ1ZjA5MzYxn4RUqQ==: 00:29:40.296 03:10:31 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjM1MGMzNjhjYWUyYmVlNzgxZTQ4ODIxNDNmNWRjNzY7eDnC: 00:29:40.296 
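The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) lines scattered through this trace use bash's ${var:+word} expansion to make the controller key optional: when ckeys[keyid] is empty (the keyid 4 iterations above show ckey=), the array stays empty and no --dhchap-ctrlr-key flag reaches the attach command. A standalone illustration with placeholder values, not real keys:

  keyid=4
  declare -a ckeys=()
  ckeys[0]="DHHC-1:03:placeholder="   # illustrative only
  ckeys[4]=""                         # keyid 4 carries no controller key in this log
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "${#ckey[@]} extra attach argument(s): ${ckey[*]}"

With keyid=4 this prints 0 extra arguments; with keyid=0 it expands to the two words --dhchap-ctrlr-key ckey0.
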
03:10:31 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:40.296 03:10:31 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:29:40.296 03:10:31 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:YWE5NDlhNDQ1YzA3NTgyZTEwYjg4MDFhMGFmMWMxZWQ5ZWRmNjBjMGQ1ZjA5MzYxn4RUqQ==: 00:29:40.296 03:10:31 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjM1MGMzNjhjYWUyYmVlNzgxZTQ4ODIxNDNmNWRjNzY7eDnC: ]] 00:29:40.296 03:10:31 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:MjM1MGMzNjhjYWUyYmVlNzgxZTQ4ODIxNDNmNWRjNzY7eDnC: 00:29:40.296 03:10:31 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 3 00:29:40.296 03:10:31 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:40.296 03:10:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:29:40.296 03:10:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:29:40.296 03:10:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:29:40.296 03:10:31 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:40.296 03:10:31 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:40.296 03:10:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:40.296 03:10:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:40.296 03:10:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:40.296 03:10:31 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:40.296 03:10:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:40.296 03:10:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:40.296 03:10:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:40.296 03:10:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:40.296 03:10:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:40.296 03:10:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:40.296 03:10:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:40.296 03:10:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:40.296 03:10:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:40.296 03:10:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:40.296 03:10:31 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:40.296 03:10:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:40.296 03:10:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:41.236 nvme0n1 00:29:41.236 03:10:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:41.236 03:10:31 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:41.236 03:10:31 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:41.236 03:10:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:41.236 03:10:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:41.236 03:10:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:41.236 03:10:31 
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:41.236 03:10:31 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:41.236 03:10:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:41.236 03:10:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:41.236 03:10:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:41.236 03:10:31 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:41.236 03:10:31 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:29:41.236 03:10:31 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:41.236 03:10:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:29:41.236 03:10:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:41.236 03:10:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:29:41.236 03:10:31 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:Y2EwYTU3OWQ4YjBkNDI1NGRiODgxOWFkZDNkYTZjZDUwNWFjYTEwNzQwNmQ2NDQxOWI4ZGMzNjk0YWIzYTNiOESKwX4=: 00:29:41.236 03:10:31 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:29:41.236 03:10:31 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:41.236 03:10:31 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:29:41.236 03:10:31 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:Y2EwYTU3OWQ4YjBkNDI1NGRiODgxOWFkZDNkYTZjZDUwNWFjYTEwNzQwNmQ2NDQxOWI4ZGMzNjk0YWIzYTNiOESKwX4=: 00:29:41.236 03:10:31 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:41.236 03:10:31 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 4 00:29:41.236 03:10:31 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:41.236 03:10:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:29:41.236 03:10:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:29:41.236 03:10:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:29:41.236 03:10:31 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:41.236 03:10:31 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:41.236 03:10:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:41.236 03:10:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:41.236 03:10:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:41.236 03:10:31 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:41.236 03:10:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:41.236 03:10:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:41.236 03:10:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:41.236 03:10:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:41.236 03:10:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:41.236 03:10:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:41.236 03:10:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:41.236 03:10:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:41.236 03:10:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:41.236 03:10:31 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:41.236 03:10:31 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:41.236 03:10:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:41.236 03:10:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:42.175 nvme0n1 00:29:42.175 03:10:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.175 03:10:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:42.175 03:10:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.175 03:10:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:42.175 03:10:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:42.175 03:10:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.175 03:10:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:42.175 03:10:32 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:42.175 03:10:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.175 03:10:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:42.175 03:10:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.175 03:10:32 nvmf_tcp.nvmf_auth -- host/auth.sh@113 -- # for digest in "${digests[@]}" 00:29:42.175 03:10:32 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:29:42.175 03:10:32 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:42.175 03:10:32 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:29:42.175 03:10:32 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:42.175 03:10:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:29:42.175 03:10:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:42.175 03:10:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:29:42.175 03:10:32 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZDVlMGQxYWQ3MWY0YTdhODkyZjZiMTNjZDc2YTIxZTRZFjZ/: 00:29:42.175 03:10:32 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2E3Y2FiOGNjYzUyNGMxNjM4NWMzYTc5YWNmYTQ3NTA2MjFmOTBmNjQ1Njg5OGFhMDg4YzM2NDdmNmYyYjI5ZNO3kRs=: 00:29:42.175 03:10:32 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:42.175 03:10:32 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:29:42.175 03:10:32 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZDVlMGQxYWQ3MWY0YTdhODkyZjZiMTNjZDc2YTIxZTRZFjZ/: 00:29:42.175 03:10:32 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2E3Y2FiOGNjYzUyNGMxNjM4NWMzYTc5YWNmYTQ3NTA2MjFmOTBmNjQ1Njg5OGFhMDg4YzM2NDdmNmYyYjI5ZNO3kRs=: ]] 00:29:42.175 03:10:32 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:Y2E3Y2FiOGNjYzUyNGMxNjM4NWMzYTc5YWNmYTQ3NTA2MjFmOTBmNjQ1Njg5OGFhMDg4YzM2NDdmNmYyYjI5ZNO3kRs=: 00:29:42.175 03:10:32 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 0 00:29:42.175 03:10:32 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:42.175 03:10:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:29:42.175 03:10:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:29:42.175 
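The host/auth.sh@113-@117 markers above expose the loop that generates every iteration in this section: one pass per digest, per DH group, per key index, first programming the target (nvmet_auth_set_key) and then exercising the host (connect_authenticate). Its shape, using only values observed in this log and relying on the script's own key arrays and helper functions, is roughly:

  digests=(sha256 sha384)                  # values seen in this section; the script may test more
  dhgroups=(ffdhe2048 ffdhe6144 ffdhe8192) # ordering in the script is an assumption
  keys=(key0 key1 key2 key3 key4)          # placeholders; the real DHHC-1 keys live in the script
  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side
              connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side
          done
      done
  done
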
03:10:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:29:42.175 03:10:32 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:42.175 03:10:32 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:42.175 03:10:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.175 03:10:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:42.175 03:10:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.175 03:10:32 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:42.175 03:10:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:42.175 03:10:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:42.175 03:10:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:42.175 03:10:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:42.175 03:10:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:42.175 03:10:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:42.175 03:10:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:42.175 03:10:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:42.175 03:10:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:42.175 03:10:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:42.175 03:10:32 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:42.175 03:10:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.175 03:10:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:42.434 nvme0n1 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- 
host/auth.sh@44 -- # keyid=1 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZTFiNGE3MWM4NGRjMjA5YzU2MzAyY2YwMGQxNThjMzk1OWE3ZjZkMDE2ODIzNGJjhAR5gA==: 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZTFiNGE3MWM4NGRjMjA5YzU2MzAyY2YwMGQxNThjMzk1OWE3ZjZkMDE2ODIzNGJjhAR5gA==: 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: ]] 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 1 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:42.434 nvme0n1 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd 
bdev_nvme_get_controllers 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:42.434 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:Y2E5NTJjNTFhNWZkOTI3NzQ3OTQ2OTE0MDk5M2U1ZjA/o/es: 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2M3YmQ4YjYzZjk3ZTJkNmFmMDVmOGQwOWJmN2EyNjQEPAL6: 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:Y2E5NTJjNTFhNWZkOTI3NzQ3OTQ2OTE0MDk5M2U1ZjA/o/es: 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2M3YmQ4YjYzZjk3ZTJkNmFmMDVmOGQwOWJmN2EyNjQEPAL6: ]] 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:Y2M3YmQ4YjYzZjk3ZTJkNmFmMDVmOGQwOWJmN2EyNjQEPAL6: 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 2 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:42.694 nvme0n1 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.694 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:42.695 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:29:42.695 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:42.695 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:29:42.695 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:42.695 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:29:42.695 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:YWE5NDlhNDQ1YzA3NTgyZTEwYjg4MDFhMGFmMWMxZWQ5ZWRmNjBjMGQ1ZjA5MzYxn4RUqQ==: 00:29:42.695 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjM1MGMzNjhjYWUyYmVlNzgxZTQ4ODIxNDNmNWRjNzY7eDnC: 00:29:42.695 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:42.695 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:29:42.695 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:YWE5NDlhNDQ1YzA3NTgyZTEwYjg4MDFhMGFmMWMxZWQ5ZWRmNjBjMGQ1ZjA5MzYxn4RUqQ==: 00:29:42.695 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjM1MGMzNjhjYWUyYmVlNzgxZTQ4ODIxNDNmNWRjNzY7eDnC: ]] 00:29:42.695 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:MjM1MGMzNjhjYWUyYmVlNzgxZTQ4ODIxNDNmNWRjNzY7eDnC: 00:29:42.695 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate 
sha384 ffdhe2048 3 00:29:42.695 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:42.695 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:29:42.695 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:29:42.695 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:29:42.695 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:42.695 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:42.695 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.695 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:42.695 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.695 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:42.695 03:10:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:42.695 03:10:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:42.695 03:10:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:42.695 03:10:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:42.695 03:10:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:42.695 03:10:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:42.695 03:10:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:42.695 03:10:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:42.695 03:10:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:42.695 03:10:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:42.695 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:42.695 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.695 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:42.954 nvme0n1 00:29:42.954 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.954 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:42.954 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.954 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:42.954 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:42.954 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.954 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:42.954 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:42.954 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.954 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:42.954 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.954 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:42.954 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:29:42.954 03:10:33 
nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:42.954 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:29:42.954 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:42.954 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:29:42.954 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:Y2EwYTU3OWQ4YjBkNDI1NGRiODgxOWFkZDNkYTZjZDUwNWFjYTEwNzQwNmQ2NDQxOWI4ZGMzNjk0YWIzYTNiOESKwX4=: 00:29:42.954 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:29:42.954 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:42.954 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:29:42.954 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:Y2EwYTU3OWQ4YjBkNDI1NGRiODgxOWFkZDNkYTZjZDUwNWFjYTEwNzQwNmQ2NDQxOWI4ZGMzNjk0YWIzYTNiOESKwX4=: 00:29:42.954 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:42.954 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 4 00:29:42.954 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:42.954 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:29:42.954 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:29:42.954 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:29:42.954 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:42.954 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:42.954 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.954 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:42.954 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.954 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:42.954 03:10:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:42.954 03:10:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:42.954 03:10:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:42.954 03:10:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:42.954 03:10:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:42.954 03:10:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:42.954 03:10:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:42.954 03:10:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:42.954 03:10:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:42.954 03:10:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:42.954 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:42.954 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.954 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:43.221 nvme0n1 00:29:43.221 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:43.221 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 
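[editor's sketch] Each pass of the keyid loop traced above first provisions the target side: nvmet_auth_set_key records the hash, DH group, and DHHC-1 host key (plus the optional bidirectional controller key) before the initiator reconnects. The xtrace shows the echo commands but not their redirection targets; the sketch below assumes they are the kernel nvmet configfs auth attributes under the host NQN, and the paths and key values are illustrative only.

    # Minimal sketch of the target-side key setup, reconstructed from the xtrace above.
    # The configfs paths are an assumption; key/ckey values are placeholders.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[$keyid]} ckey=${ckeys[$keyid]}
        local host_cfg=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac($digest)" > "$host_cfg/dhchap_hash"      # e.g. hmac(sha384)
        echo "$dhgroup"      > "$host_cfg/dhchap_dhgroup"   # e.g. ffdhe2048
        echo "$key"          > "$host_cfg/dhchap_key"       # DHHC-1:xx:...: host key
        # A controller key is only written when bidirectional auth is under test,
        # mirroring the [[ -z $ckey ]] guard visible in the trace.
        [[ -z $ckey ]] || echo "$ckey" > "$host_cfg/dhchap_ctrl_key"
    }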
00:29:43.221 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:43.221 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:43.222 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:43.222 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:43.222 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:43.222 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:43.222 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:43.222 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:43.222 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:43.222 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:29:43.222 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:43.222 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:29:43.222 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:43.222 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:29:43.222 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:43.222 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:29:43.222 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZDVlMGQxYWQ3MWY0YTdhODkyZjZiMTNjZDc2YTIxZTRZFjZ/: 00:29:43.222 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2E3Y2FiOGNjYzUyNGMxNjM4NWMzYTc5YWNmYTQ3NTA2MjFmOTBmNjQ1Njg5OGFhMDg4YzM2NDdmNmYyYjI5ZNO3kRs=: 00:29:43.222 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:43.222 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:29:43.222 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZDVlMGQxYWQ3MWY0YTdhODkyZjZiMTNjZDc2YTIxZTRZFjZ/: 00:29:43.222 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2E3Y2FiOGNjYzUyNGMxNjM4NWMzYTc5YWNmYTQ3NTA2MjFmOTBmNjQ1Njg5OGFhMDg4YzM2NDdmNmYyYjI5ZNO3kRs=: ]] 00:29:43.222 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:Y2E3Y2FiOGNjYzUyNGMxNjM4NWMzYTc5YWNmYTQ3NTA2MjFmOTBmNjQ1Njg5OGFhMDg4YzM2NDdmNmYyYjI5ZNO3kRs=: 00:29:43.222 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 0 00:29:43.222 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:43.222 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:29:43.222 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:29:43.222 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:29:43.222 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:43.222 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:43.222 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:43.222 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:43.222 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:43.222 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:43.222 03:10:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:43.222 
03:10:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:43.222 03:10:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:43.222 03:10:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:43.222 03:10:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:43.222 03:10:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:43.222 03:10:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:43.222 03:10:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:43.222 03:10:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:43.222 03:10:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:43.222 03:10:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:43.222 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:43.222 03:10:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:43.222 nvme0n1 00:29:43.222 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:43.222 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:43.222 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:43.222 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:43.222 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZTFiNGE3MWM4NGRjMjA5YzU2MzAyY2YwMGQxNThjMzk1OWE3ZjZkMDE2ODIzNGJjhAR5gA==: 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZTFiNGE3MWM4NGRjMjA5YzU2MzAyY2YwMGQxNThjMzk1OWE3ZjZkMDE2ODIzNGJjhAR5gA==: 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: ]] 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 1 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:43.485 nvme0n1 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:43.485 03:10:34 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:Y2E5NTJjNTFhNWZkOTI3NzQ3OTQ2OTE0MDk5M2U1ZjA/o/es: 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2M3YmQ4YjYzZjk3ZTJkNmFmMDVmOGQwOWJmN2EyNjQEPAL6: 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:Y2E5NTJjNTFhNWZkOTI3NzQ3OTQ2OTE0MDk5M2U1ZjA/o/es: 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2M3YmQ4YjYzZjk3ZTJkNmFmMDVmOGQwOWJmN2EyNjQEPAL6: ]] 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:Y2M3YmQ4YjYzZjk3ZTJkNmFmMDVmOGQwOWJmN2EyNjQEPAL6: 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 2 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:43.745 nvme0n1 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:YWE5NDlhNDQ1YzA3NTgyZTEwYjg4MDFhMGFmMWMxZWQ5ZWRmNjBjMGQ1ZjA5MzYxn4RUqQ==: 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjM1MGMzNjhjYWUyYmVlNzgxZTQ4ODIxNDNmNWRjNzY7eDnC: 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:YWE5NDlhNDQ1YzA3NTgyZTEwYjg4MDFhMGFmMWMxZWQ5ZWRmNjBjMGQ1ZjA5MzYxn4RUqQ==: 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjM1MGMzNjhjYWUyYmVlNzgxZTQ4ODIxNDNmNWRjNzY7eDnC: ]] 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:MjM1MGMzNjhjYWUyYmVlNzgxZTQ4ODIxNDNmNWRjNzY7eDnC: 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 3 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:43.745 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:44.004 nvme0n1 00:29:44.004 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.004 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:44.004 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.004 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:44.004 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:44.004 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.004 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:44.004 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:44.004 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.004 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:44.004 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.004 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:44.004 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:29:44.004 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:44.004 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:29:44.004 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:44.004 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:29:44.004 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:Y2EwYTU3OWQ4YjBkNDI1NGRiODgxOWFkZDNkYTZjZDUwNWFjYTEwNzQwNmQ2NDQxOWI4ZGMzNjk0YWIzYTNiOESKwX4=: 00:29:44.004 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:29:44.004 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:44.004 03:10:34 nvmf_tcp.nvmf_auth -- 
host/auth.sh@49 -- # echo ffdhe3072 00:29:44.004 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:Y2EwYTU3OWQ4YjBkNDI1NGRiODgxOWFkZDNkYTZjZDUwNWFjYTEwNzQwNmQ2NDQxOWI4ZGMzNjk0YWIzYTNiOESKwX4=: 00:29:44.004 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:44.004 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 4 00:29:44.004 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:44.004 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:29:44.004 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:29:44.004 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:29:44.004 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:44.004 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:44.004 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.004 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:44.004 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.004 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:44.004 03:10:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:44.004 03:10:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:44.004 03:10:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:44.004 03:10:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:44.004 03:10:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:44.004 03:10:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:44.004 03:10:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:44.004 03:10:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:44.004 03:10:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:44.004 03:10:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:44.004 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:44.004 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.004 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:44.263 nvme0n1 00:29:44.263 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.264 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:44.264 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.264 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:44.264 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:44.264 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.264 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:44.264 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:44.264 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.264 03:10:34 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:29:44.264 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.264 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:29:44.264 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:44.264 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:29:44.264 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:44.264 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:29:44.264 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:44.264 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:29:44.264 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZDVlMGQxYWQ3MWY0YTdhODkyZjZiMTNjZDc2YTIxZTRZFjZ/: 00:29:44.264 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2E3Y2FiOGNjYzUyNGMxNjM4NWMzYTc5YWNmYTQ3NTA2MjFmOTBmNjQ1Njg5OGFhMDg4YzM2NDdmNmYyYjI5ZNO3kRs=: 00:29:44.264 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:44.264 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:29:44.264 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZDVlMGQxYWQ3MWY0YTdhODkyZjZiMTNjZDc2YTIxZTRZFjZ/: 00:29:44.264 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2E3Y2FiOGNjYzUyNGMxNjM4NWMzYTc5YWNmYTQ3NTA2MjFmOTBmNjQ1Njg5OGFhMDg4YzM2NDdmNmYyYjI5ZNO3kRs=: ]] 00:29:44.264 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:Y2E3Y2FiOGNjYzUyNGMxNjM4NWMzYTc5YWNmYTQ3NTA2MjFmOTBmNjQ1Njg5OGFhMDg4YzM2NDdmNmYyYjI5ZNO3kRs=: 00:29:44.264 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 0 00:29:44.264 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:44.264 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:29:44.264 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:29:44.264 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:29:44.264 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:44.264 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:44.264 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.264 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:44.264 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.264 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:44.264 03:10:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:44.264 03:10:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:44.264 03:10:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:44.264 03:10:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:44.264 03:10:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:44.264 03:10:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:44.264 03:10:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:44.264 03:10:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:44.264 03:10:34 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:44.264 03:10:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:44.264 03:10:34 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:44.264 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.264 03:10:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:44.524 nvme0n1 00:29:44.524 03:10:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.524 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:44.524 03:10:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.524 03:10:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:44.524 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:44.524 03:10:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.524 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:44.524 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:44.524 03:10:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.524 03:10:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:44.524 03:10:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.524 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:44.524 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:29:44.524 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:44.524 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:29:44.524 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:44.524 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:29:44.524 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZTFiNGE3MWM4NGRjMjA5YzU2MzAyY2YwMGQxNThjMzk1OWE3ZjZkMDE2ODIzNGJjhAR5gA==: 00:29:44.524 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: 00:29:44.524 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:44.524 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:29:44.524 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZTFiNGE3MWM4NGRjMjA5YzU2MzAyY2YwMGQxNThjMzk1OWE3ZjZkMDE2ODIzNGJjhAR5gA==: 00:29:44.524 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: ]] 00:29:44.524 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: 00:29:44.524 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 1 00:29:44.524 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:44.524 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:29:44.524 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:29:44.524 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:29:44.524 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@71 
-- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:44.524 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:44.524 03:10:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.524 03:10:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:44.524 03:10:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.524 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:44.524 03:10:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:44.524 03:10:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:44.524 03:10:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:44.524 03:10:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:44.524 03:10:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:44.524 03:10:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:44.524 03:10:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:44.524 03:10:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:44.524 03:10:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:44.524 03:10:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:44.524 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:44.524 03:10:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.524 03:10:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:44.782 nvme0n1 00:29:44.782 03:10:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.782 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:44.782 03:10:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.782 03:10:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:44.782 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:44.782 03:10:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.042 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:45.042 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:45.042 03:10:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.042 03:10:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:45.042 03:10:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.042 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:45.042 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:29:45.042 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:45.042 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:29:45.042 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:45.042 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:29:45.042 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # 
key=DHHC-1:01:Y2E5NTJjNTFhNWZkOTI3NzQ3OTQ2OTE0MDk5M2U1ZjA/o/es: 00:29:45.042 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2M3YmQ4YjYzZjk3ZTJkNmFmMDVmOGQwOWJmN2EyNjQEPAL6: 00:29:45.042 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:45.042 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:29:45.042 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:Y2E5NTJjNTFhNWZkOTI3NzQ3OTQ2OTE0MDk5M2U1ZjA/o/es: 00:29:45.042 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2M3YmQ4YjYzZjk3ZTJkNmFmMDVmOGQwOWJmN2EyNjQEPAL6: ]] 00:29:45.042 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:Y2M3YmQ4YjYzZjk3ZTJkNmFmMDVmOGQwOWJmN2EyNjQEPAL6: 00:29:45.042 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 2 00:29:45.042 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:45.042 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:29:45.042 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:29:45.042 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:29:45.042 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:45.042 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:45.042 03:10:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.042 03:10:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:45.042 03:10:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.042 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:45.042 03:10:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:45.042 03:10:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:45.042 03:10:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:45.042 03:10:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:45.042 03:10:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:45.042 03:10:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:45.042 03:10:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:45.042 03:10:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:45.042 03:10:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:45.042 03:10:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:45.042 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:45.042 03:10:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.042 03:10:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:45.301 nvme0n1 00:29:45.301 03:10:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.301 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:45.301 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:45.301 03:10:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.301 03:10:35 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:45.301 03:10:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.301 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:45.301 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:45.301 03:10:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.301 03:10:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:45.301 03:10:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.301 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:45.301 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:29:45.301 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:45.301 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:29:45.301 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:45.301 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:29:45.301 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:YWE5NDlhNDQ1YzA3NTgyZTEwYjg4MDFhMGFmMWMxZWQ5ZWRmNjBjMGQ1ZjA5MzYxn4RUqQ==: 00:29:45.301 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjM1MGMzNjhjYWUyYmVlNzgxZTQ4ODIxNDNmNWRjNzY7eDnC: 00:29:45.301 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:45.301 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:29:45.301 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:YWE5NDlhNDQ1YzA3NTgyZTEwYjg4MDFhMGFmMWMxZWQ5ZWRmNjBjMGQ1ZjA5MzYxn4RUqQ==: 00:29:45.301 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjM1MGMzNjhjYWUyYmVlNzgxZTQ4ODIxNDNmNWRjNzY7eDnC: ]] 00:29:45.301 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:MjM1MGMzNjhjYWUyYmVlNzgxZTQ4ODIxNDNmNWRjNzY7eDnC: 00:29:45.301 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 3 00:29:45.301 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:45.301 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:29:45.301 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:29:45.301 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:29:45.301 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:45.301 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:45.301 03:10:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.301 03:10:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:45.301 03:10:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.301 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:45.301 03:10:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:45.301 03:10:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:45.301 03:10:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:45.301 03:10:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:45.301 03:10:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
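[editor's sketch] On the initiator side, every iteration in the trace follows the same connect/verify/detach cycle: restrict the allowed digests and DH groups, resolve the target address (get_main_ns_ip selects NVMF_INITIATOR_IP, i.e. 10.0.0.1, for the tcp transport), attach with the matching key pair, confirm the controller came up, and tear it down before the next digest/dhgroup/keyid combination. A condensed sketch of that cycle, assembled from the commands visible in the xtrace and assuming rpc_cmd is the test suite's wrapper around SPDK's scripts/rpc.py:

    # Condensed initiator-side cycle, reconstructed from the xtrace; helper names are
    # those visible in the log, key/ckey names mirror the loop index.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        local ip ctrlr

        # Only advertise the digest/DH group under test for this iteration.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        ip=$(get_main_ns_ip)   # resolves to NVMF_INITIATOR_IP (10.0.0.1) for tcp
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$ip" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}

        # Authentication succeeded if the controller shows up under its bdev name,
        # then detach so the next combination starts clean.
        ctrlr=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
        [[ $ctrlr == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }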
00:29:45.301 03:10:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:45.301 03:10:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:45.301 03:10:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:45.301 03:10:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:45.301 03:10:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:45.301 03:10:35 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:45.301 03:10:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.301 03:10:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:45.561 nvme0n1 00:29:45.561 03:10:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.561 03:10:36 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:45.561 03:10:36 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:45.561 03:10:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.561 03:10:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:45.561 03:10:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.561 03:10:36 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:45.561 03:10:36 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:45.561 03:10:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.561 03:10:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:45.561 03:10:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.561 03:10:36 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:45.561 03:10:36 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:29:45.561 03:10:36 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:45.561 03:10:36 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:29:45.561 03:10:36 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:45.561 03:10:36 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:29:45.561 03:10:36 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:Y2EwYTU3OWQ4YjBkNDI1NGRiODgxOWFkZDNkYTZjZDUwNWFjYTEwNzQwNmQ2NDQxOWI4ZGMzNjk0YWIzYTNiOESKwX4=: 00:29:45.561 03:10:36 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:29:45.561 03:10:36 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:45.561 03:10:36 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:29:45.561 03:10:36 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:Y2EwYTU3OWQ4YjBkNDI1NGRiODgxOWFkZDNkYTZjZDUwNWFjYTEwNzQwNmQ2NDQxOWI4ZGMzNjk0YWIzYTNiOESKwX4=: 00:29:45.561 03:10:36 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:45.561 03:10:36 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 4 00:29:45.561 03:10:36 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:45.561 03:10:36 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:29:45.561 03:10:36 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:29:45.561 03:10:36 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:29:45.561 03:10:36 
nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:45.561 03:10:36 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:45.561 03:10:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.561 03:10:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:45.561 03:10:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.561 03:10:36 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:45.561 03:10:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:45.561 03:10:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:45.561 03:10:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:45.561 03:10:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:45.561 03:10:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:45.561 03:10:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:45.561 03:10:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:45.561 03:10:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:45.561 03:10:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:45.561 03:10:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:45.561 03:10:36 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:45.561 03:10:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.561 03:10:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:45.821 nvme0n1 00:29:45.821 03:10:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.821 03:10:36 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:45.821 03:10:36 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:45.821 03:10:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.821 03:10:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:45.821 03:10:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.821 03:10:36 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:45.821 03:10:36 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:45.822 03:10:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.822 03:10:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:45.822 03:10:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.822 03:10:36 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:29:45.822 03:10:36 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:45.822 03:10:36 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:29:45.822 03:10:36 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:45.822 03:10:36 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:29:45.822 03:10:36 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:45.822 03:10:36 nvmf_tcp.nvmf_auth -- 
host/auth.sh@44 -- # keyid=0 00:29:45.822 03:10:36 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZDVlMGQxYWQ3MWY0YTdhODkyZjZiMTNjZDc2YTIxZTRZFjZ/: 00:29:45.822 03:10:36 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2E3Y2FiOGNjYzUyNGMxNjM4NWMzYTc5YWNmYTQ3NTA2MjFmOTBmNjQ1Njg5OGFhMDg4YzM2NDdmNmYyYjI5ZNO3kRs=: 00:29:45.822 03:10:36 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:45.822 03:10:36 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:29:45.822 03:10:36 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZDVlMGQxYWQ3MWY0YTdhODkyZjZiMTNjZDc2YTIxZTRZFjZ/: 00:29:45.822 03:10:36 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2E3Y2FiOGNjYzUyNGMxNjM4NWMzYTc5YWNmYTQ3NTA2MjFmOTBmNjQ1Njg5OGFhMDg4YzM2NDdmNmYyYjI5ZNO3kRs=: ]] 00:29:45.822 03:10:36 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:Y2E3Y2FiOGNjYzUyNGMxNjM4NWMzYTc5YWNmYTQ3NTA2MjFmOTBmNjQ1Njg5OGFhMDg4YzM2NDdmNmYyYjI5ZNO3kRs=: 00:29:45.822 03:10:36 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 0 00:29:45.822 03:10:36 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:45.822 03:10:36 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:29:45.822 03:10:36 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:29:45.822 03:10:36 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:29:45.822 03:10:36 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:45.822 03:10:36 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:45.822 03:10:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.822 03:10:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:45.822 03:10:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.822 03:10:36 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:45.822 03:10:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:45.822 03:10:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:45.822 03:10:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:45.822 03:10:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:45.822 03:10:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:45.822 03:10:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:45.822 03:10:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:45.822 03:10:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:45.822 03:10:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:45.822 03:10:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:45.822 03:10:36 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:45.822 03:10:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.822 03:10:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:46.389 nvme0n1 00:29:46.389 03:10:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:46.389 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd 
bdev_nvme_get_controllers 00:29:46.389 03:10:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:46.389 03:10:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:46.389 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:46.389 03:10:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:46.389 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:46.389 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:46.389 03:10:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:46.389 03:10:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:46.389 03:10:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:46.389 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:46.389 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:29:46.389 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:46.389 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:29:46.389 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:46.389 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:29:46.389 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZTFiNGE3MWM4NGRjMjA5YzU2MzAyY2YwMGQxNThjMzk1OWE3ZjZkMDE2ODIzNGJjhAR5gA==: 00:29:46.390 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: 00:29:46.390 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:46.390 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:29:46.390 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZTFiNGE3MWM4NGRjMjA5YzU2MzAyY2YwMGQxNThjMzk1OWE3ZjZkMDE2ODIzNGJjhAR5gA==: 00:29:46.390 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: ]] 00:29:46.390 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: 00:29:46.390 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 1 00:29:46.390 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:46.390 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:29:46.390 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:29:46.390 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:29:46.390 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:46.390 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:46.390 03:10:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:46.390 03:10:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:46.390 03:10:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:46.390 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:46.390 03:10:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:46.390 03:10:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:46.390 
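The trace around here is one complete initiator-side pass: bdev_nvme_set_options narrows the allowed DH-HMAC-CHAP digest and DH group, bdev_nvme_attach_controller dials 10.0.0.1:4420 with the matching keyN/ckeyN pair, and the pass only counts if a controller named nvme0 shows up afterwards and can be detached again. A minimal sketch of that per-iteration sequence, assuming rpc_cmd talks to the same running SPDK initiator as in the trace and that key1/ckey1 were already registered with it earlier in the log:

    # One digest/dhgroup/keyid iteration, condensed from the traced RPCs above.
    # Assumption: rpc_cmd wraps the initiator's JSON-RPC socket and the named
    # keys are already loaded; only the commands themselves appear in the log.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Authentication succeeded only if the controller actually materialised.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0   # clean slate before the next keyid
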
03:10:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:46.390 03:10:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:46.390 03:10:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:46.390 03:10:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:46.390 03:10:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:46.390 03:10:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:46.390 03:10:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:46.390 03:10:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:46.390 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:46.390 03:10:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:46.390 03:10:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:46.958 nvme0n1 00:29:46.958 03:10:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:46.958 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:46.958 03:10:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:46.958 03:10:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:46.958 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:46.958 03:10:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:46.958 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:46.958 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:46.958 03:10:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:46.958 03:10:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:46.958 03:10:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:46.958 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:46.958 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:29:46.958 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:46.958 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:29:46.958 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:46.958 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:29:46.959 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:Y2E5NTJjNTFhNWZkOTI3NzQ3OTQ2OTE0MDk5M2U1ZjA/o/es: 00:29:46.959 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2M3YmQ4YjYzZjk3ZTJkNmFmMDVmOGQwOWJmN2EyNjQEPAL6: 00:29:46.959 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:46.959 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:29:46.959 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:Y2E5NTJjNTFhNWZkOTI3NzQ3OTQ2OTE0MDk5M2U1ZjA/o/es: 00:29:46.959 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2M3YmQ4YjYzZjk3ZTJkNmFmMDVmOGQwOWJmN2EyNjQEPAL6: ]] 00:29:46.959 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:Y2M3YmQ4YjYzZjk3ZTJkNmFmMDVmOGQwOWJmN2EyNjQEPAL6: 00:29:46.959 
03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 2 00:29:46.959 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:46.959 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:29:46.959 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:29:46.959 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:29:46.959 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:46.959 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:46.959 03:10:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:46.959 03:10:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:46.959 03:10:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:46.959 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:46.959 03:10:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:46.959 03:10:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:46.959 03:10:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:46.959 03:10:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:46.959 03:10:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:46.959 03:10:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:46.959 03:10:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:46.959 03:10:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:46.959 03:10:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:46.959 03:10:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:46.959 03:10:37 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:46.959 03:10:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:46.959 03:10:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:47.528 nvme0n1 00:29:47.528 03:10:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:47.528 03:10:38 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:47.528 03:10:38 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:47.528 03:10:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:47.528 03:10:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:47.528 03:10:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:47.528 03:10:38 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:47.528 03:10:38 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:47.528 03:10:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:47.528 03:10:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:47.528 03:10:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:47.528 03:10:38 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:47.528 03:10:38 nvmf_tcp.nvmf_auth -- 
host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:29:47.528 03:10:38 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:47.528 03:10:38 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:29:47.528 03:10:38 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:47.528 03:10:38 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:29:47.528 03:10:38 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:YWE5NDlhNDQ1YzA3NTgyZTEwYjg4MDFhMGFmMWMxZWQ5ZWRmNjBjMGQ1ZjA5MzYxn4RUqQ==: 00:29:47.528 03:10:38 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjM1MGMzNjhjYWUyYmVlNzgxZTQ4ODIxNDNmNWRjNzY7eDnC: 00:29:47.528 03:10:38 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:47.528 03:10:38 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:29:47.528 03:10:38 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:YWE5NDlhNDQ1YzA3NTgyZTEwYjg4MDFhMGFmMWMxZWQ5ZWRmNjBjMGQ1ZjA5MzYxn4RUqQ==: 00:29:47.528 03:10:38 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjM1MGMzNjhjYWUyYmVlNzgxZTQ4ODIxNDNmNWRjNzY7eDnC: ]] 00:29:47.528 03:10:38 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:MjM1MGMzNjhjYWUyYmVlNzgxZTQ4ODIxNDNmNWRjNzY7eDnC: 00:29:47.528 03:10:38 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 3 00:29:47.528 03:10:38 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:47.528 03:10:38 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:29:47.528 03:10:38 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:29:47.528 03:10:38 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:29:47.528 03:10:38 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:47.528 03:10:38 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:47.528 03:10:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:47.528 03:10:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:47.528 03:10:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:47.528 03:10:38 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:47.528 03:10:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:47.528 03:10:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:47.528 03:10:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:47.528 03:10:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:47.528 03:10:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:47.528 03:10:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:47.528 03:10:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:47.528 03:10:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:47.528 03:10:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:47.528 03:10:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:47.528 03:10:38 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:47.528 03:10:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:29:47.528 03:10:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:48.097 nvme0n1 00:29:48.097 03:10:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:48.097 03:10:38 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:48.097 03:10:38 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:48.097 03:10:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:48.097 03:10:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:48.097 03:10:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:48.097 03:10:38 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:48.097 03:10:38 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:48.097 03:10:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:48.097 03:10:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:48.097 03:10:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:48.097 03:10:38 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:48.098 03:10:38 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:29:48.098 03:10:38 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:48.098 03:10:38 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:29:48.098 03:10:38 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:48.098 03:10:38 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:29:48.098 03:10:38 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:Y2EwYTU3OWQ4YjBkNDI1NGRiODgxOWFkZDNkYTZjZDUwNWFjYTEwNzQwNmQ2NDQxOWI4ZGMzNjk0YWIzYTNiOESKwX4=: 00:29:48.098 03:10:38 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:29:48.098 03:10:38 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:48.098 03:10:38 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:29:48.098 03:10:38 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:Y2EwYTU3OWQ4YjBkNDI1NGRiODgxOWFkZDNkYTZjZDUwNWFjYTEwNzQwNmQ2NDQxOWI4ZGMzNjk0YWIzYTNiOESKwX4=: 00:29:48.098 03:10:38 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:48.098 03:10:38 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 4 00:29:48.098 03:10:38 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:48.098 03:10:38 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:29:48.098 03:10:38 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:29:48.098 03:10:38 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:29:48.098 03:10:38 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:48.098 03:10:38 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:48.098 03:10:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:48.098 03:10:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:48.098 03:10:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:48.098 03:10:38 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:48.098 03:10:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:48.098 03:10:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 
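The nvmf/common.sh@728-742 fragments repeated around each attach are get_main_ns_ip picking the address to dial: NVMF_FIRST_TARGET_IP for rdma runs, NVMF_INITIATOR_IP for tcp, which resolves to 10.0.0.1 on this node. A hedged reconstruction from those trace entries (the guard clauses and the indirect variable lookup are inferred, not shown verbatim):

    # Reconstructed from the nvmf/common.sh@728-742 xtrace entries; treat the
    # early returns and the ${!ip} dereference as assumptions about the helper.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT ]] && return 1                    # trace: [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # trace: [[ -z NVMF_INITIATOR_IP ]]
        ip=${ip_candidates[$TEST_TRANSPORT]}                     # trace: ip=NVMF_INITIATOR_IP
        ip=${!ip}                                                # dereference, 10.0.0.1 on this node
        [[ -z $ip ]] && return 1                                 # trace: [[ -z 10.0.0.1 ]]
        echo "$ip"
    }
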
00:29:48.098 03:10:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:48.098 03:10:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:48.098 03:10:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:48.098 03:10:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:48.098 03:10:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:48.098 03:10:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:48.098 03:10:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:48.098 03:10:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:48.098 03:10:38 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:48.098 03:10:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:48.098 03:10:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:48.665 nvme0n1 00:29:48.665 03:10:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:48.665 03:10:39 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:48.665 03:10:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:48.665 03:10:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:48.665 03:10:39 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:48.665 03:10:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:48.665 03:10:39 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:48.665 03:10:39 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:48.665 03:10:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:48.665 03:10:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:48.665 03:10:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:48.665 03:10:39 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:29:48.665 03:10:39 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:48.665 03:10:39 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:29:48.665 03:10:39 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:48.665 03:10:39 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:29:48.665 03:10:39 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:48.665 03:10:39 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:29:48.665 03:10:39 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZDVlMGQxYWQ3MWY0YTdhODkyZjZiMTNjZDc2YTIxZTRZFjZ/: 00:29:48.665 03:10:39 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2E3Y2FiOGNjYzUyNGMxNjM4NWMzYTc5YWNmYTQ3NTA2MjFmOTBmNjQ1Njg5OGFhMDg4YzM2NDdmNmYyYjI5ZNO3kRs=: 00:29:48.665 03:10:39 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:48.665 03:10:39 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:29:48.665 03:10:39 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZDVlMGQxYWQ3MWY0YTdhODkyZjZiMTNjZDc2YTIxZTRZFjZ/: 00:29:48.665 03:10:39 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:Y2E3Y2FiOGNjYzUyNGMxNjM4NWMzYTc5YWNmYTQ3NTA2MjFmOTBmNjQ1Njg5OGFhMDg4YzM2NDdmNmYyYjI5ZNO3kRs=: ]] 00:29:48.665 03:10:39 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:Y2E3Y2FiOGNjYzUyNGMxNjM4NWMzYTc5YWNmYTQ3NTA2MjFmOTBmNjQ1Njg5OGFhMDg4YzM2NDdmNmYyYjI5ZNO3kRs=: 00:29:48.665 03:10:39 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 0 00:29:48.665 03:10:39 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:48.665 03:10:39 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:29:48.665 03:10:39 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:29:48.665 03:10:39 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:29:48.665 03:10:39 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:48.665 03:10:39 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:48.665 03:10:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:48.665 03:10:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:48.665 03:10:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:48.665 03:10:39 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:48.665 03:10:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:48.665 03:10:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:48.665 03:10:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:48.665 03:10:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:48.665 03:10:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:48.665 03:10:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:48.665 03:10:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:48.665 03:10:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:48.665 03:10:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:48.665 03:10:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:48.665 03:10:39 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:48.665 03:10:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:48.665 03:10:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:49.635 nvme0n1 00:29:49.635 03:10:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:49.635 03:10:40 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:49.635 03:10:40 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:49.635 03:10:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:49.635 03:10:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:49.635 03:10:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:49.635 03:10:40 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:49.635 03:10:40 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:49.635 03:10:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:49.635 03:10:40 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:49.635 03:10:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:49.635 03:10:40 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:49.635 03:10:40 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:29:49.635 03:10:40 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:49.635 03:10:40 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:29:49.635 03:10:40 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:49.635 03:10:40 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:29:49.635 03:10:40 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZTFiNGE3MWM4NGRjMjA5YzU2MzAyY2YwMGQxNThjMzk1OWE3ZjZkMDE2ODIzNGJjhAR5gA==: 00:29:49.635 03:10:40 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: 00:29:49.635 03:10:40 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:49.635 03:10:40 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:29:49.635 03:10:40 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZTFiNGE3MWM4NGRjMjA5YzU2MzAyY2YwMGQxNThjMzk1OWE3ZjZkMDE2ODIzNGJjhAR5gA==: 00:29:49.635 03:10:40 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: ]] 00:29:49.635 03:10:40 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: 00:29:49.635 03:10:40 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 1 00:29:49.635 03:10:40 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:49.635 03:10:40 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:29:49.635 03:10:40 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:29:49.635 03:10:40 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:29:49.635 03:10:40 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:49.635 03:10:40 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:49.635 03:10:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:49.635 03:10:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:49.635 03:10:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:49.635 03:10:40 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:49.635 03:10:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:49.635 03:10:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:49.635 03:10:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:49.635 03:10:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:49.635 03:10:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:49.635 03:10:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:49.635 03:10:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:49.635 03:10:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:49.635 03:10:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:49.635 03:10:40 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:49.635 03:10:40 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:49.635 03:10:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:49.635 03:10:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:50.573 nvme0n1 00:29:50.573 03:10:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.573 03:10:41 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:50.573 03:10:41 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:50.573 03:10:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.573 03:10:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:50.573 03:10:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.573 03:10:41 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:50.573 03:10:41 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:50.573 03:10:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.573 03:10:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:50.833 03:10:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.833 03:10:41 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:50.833 03:10:41 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:29:50.833 03:10:41 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:50.833 03:10:41 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:29:50.833 03:10:41 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:50.833 03:10:41 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:29:50.833 03:10:41 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:Y2E5NTJjNTFhNWZkOTI3NzQ3OTQ2OTE0MDk5M2U1ZjA/o/es: 00:29:50.833 03:10:41 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2M3YmQ4YjYzZjk3ZTJkNmFmMDVmOGQwOWJmN2EyNjQEPAL6: 00:29:50.833 03:10:41 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:50.833 03:10:41 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:29:50.833 03:10:41 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:Y2E5NTJjNTFhNWZkOTI3NzQ3OTQ2OTE0MDk5M2U1ZjA/o/es: 00:29:50.833 03:10:41 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2M3YmQ4YjYzZjk3ZTJkNmFmMDVmOGQwOWJmN2EyNjQEPAL6: ]] 00:29:50.833 03:10:41 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:Y2M3YmQ4YjYzZjk3ZTJkNmFmMDVmOGQwOWJmN2EyNjQEPAL6: 00:29:50.833 03:10:41 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 2 00:29:50.833 03:10:41 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:50.833 03:10:41 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:29:50.833 03:10:41 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:29:50.833 03:10:41 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:29:50.833 03:10:41 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:50.833 03:10:41 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe8192 00:29:50.833 03:10:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.833 03:10:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:50.833 03:10:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.833 03:10:41 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:50.833 03:10:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:50.833 03:10:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:50.833 03:10:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:50.833 03:10:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:50.833 03:10:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:50.833 03:10:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:50.833 03:10:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:50.833 03:10:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:50.833 03:10:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:50.833 03:10:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:50.833 03:10:41 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:50.833 03:10:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.833 03:10:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:51.773 nvme0n1 00:29:51.773 03:10:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:51.773 03:10:42 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:51.773 03:10:42 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:51.773 03:10:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:51.773 03:10:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:51.773 03:10:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:51.773 03:10:42 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:51.773 03:10:42 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:51.773 03:10:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:51.773 03:10:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:51.773 03:10:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:51.773 03:10:42 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:51.773 03:10:42 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:29:51.773 03:10:42 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:51.773 03:10:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:29:51.773 03:10:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:51.773 03:10:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:29:51.773 03:10:42 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:YWE5NDlhNDQ1YzA3NTgyZTEwYjg4MDFhMGFmMWMxZWQ5ZWRmNjBjMGQ1ZjA5MzYxn4RUqQ==: 00:29:51.773 03:10:42 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjM1MGMzNjhjYWUyYmVlNzgxZTQ4ODIxNDNmNWRjNzY7eDnC: 00:29:51.773 
03:10:42 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:51.773 03:10:42 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:29:51.773 03:10:42 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:YWE5NDlhNDQ1YzA3NTgyZTEwYjg4MDFhMGFmMWMxZWQ5ZWRmNjBjMGQ1ZjA5MzYxn4RUqQ==: 00:29:51.773 03:10:42 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjM1MGMzNjhjYWUyYmVlNzgxZTQ4ODIxNDNmNWRjNzY7eDnC: ]] 00:29:51.773 03:10:42 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:MjM1MGMzNjhjYWUyYmVlNzgxZTQ4ODIxNDNmNWRjNzY7eDnC: 00:29:51.773 03:10:42 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 3 00:29:51.773 03:10:42 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:51.773 03:10:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:29:51.773 03:10:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:29:51.773 03:10:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:29:51.773 03:10:42 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:51.773 03:10:42 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:51.773 03:10:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:51.773 03:10:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:51.773 03:10:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:51.773 03:10:42 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:51.773 03:10:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:51.773 03:10:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:51.773 03:10:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:51.773 03:10:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:51.773 03:10:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:51.773 03:10:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:51.773 03:10:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:51.773 03:10:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:51.773 03:10:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:51.773 03:10:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:51.773 03:10:42 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:51.773 03:10:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:51.773 03:10:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:52.712 nvme0n1 00:29:52.712 03:10:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.712 03:10:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:52.712 03:10:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:52.712 03:10:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.712 03:10:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:52.712 03:10:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.712 03:10:43 
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:52.712 03:10:43 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:52.712 03:10:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.712 03:10:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:52.712 03:10:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.712 03:10:43 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:52.712 03:10:43 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:29:52.712 03:10:43 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:52.712 03:10:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:29:52.712 03:10:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:52.712 03:10:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:29:52.712 03:10:43 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:Y2EwYTU3OWQ4YjBkNDI1NGRiODgxOWFkZDNkYTZjZDUwNWFjYTEwNzQwNmQ2NDQxOWI4ZGMzNjk0YWIzYTNiOESKwX4=: 00:29:52.712 03:10:43 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:29:52.712 03:10:43 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:52.712 03:10:43 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:29:52.712 03:10:43 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:Y2EwYTU3OWQ4YjBkNDI1NGRiODgxOWFkZDNkYTZjZDUwNWFjYTEwNzQwNmQ2NDQxOWI4ZGMzNjk0YWIzYTNiOESKwX4=: 00:29:52.712 03:10:43 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:52.712 03:10:43 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 4 00:29:52.712 03:10:43 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:52.712 03:10:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:29:52.712 03:10:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:29:52.712 03:10:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:29:52.712 03:10:43 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:52.712 03:10:43 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:52.712 03:10:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.712 03:10:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:52.712 03:10:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.712 03:10:43 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:52.712 03:10:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:52.712 03:10:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:52.712 03:10:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:52.712 03:10:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:52.712 03:10:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:52.712 03:10:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:52.712 03:10:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:52.712 03:10:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:52.712 03:10:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:52.712 03:10:43 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:52.712 03:10:43 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:52.712 03:10:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.712 03:10:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:53.647 nvme0n1 00:29:53.647 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.647 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:53.647 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.647 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:53.647 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:53.647 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.647 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:53.647 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:53.647 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.647 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:53.647 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.647 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@113 -- # for digest in "${digests[@]}" 00:29:53.648 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:29:53.648 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:53.648 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:29:53.648 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:53.648 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:29:53.648 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:53.648 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:29:53.648 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZDVlMGQxYWQ3MWY0YTdhODkyZjZiMTNjZDc2YTIxZTRZFjZ/: 00:29:53.648 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2E3Y2FiOGNjYzUyNGMxNjM4NWMzYTc5YWNmYTQ3NTA2MjFmOTBmNjQ1Njg5OGFhMDg4YzM2NDdmNmYyYjI5ZNO3kRs=: 00:29:53.648 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:53.648 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:29:53.648 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZDVlMGQxYWQ3MWY0YTdhODkyZjZiMTNjZDc2YTIxZTRZFjZ/: 00:29:53.648 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2E3Y2FiOGNjYzUyNGMxNjM4NWMzYTc5YWNmYTQ3NTA2MjFmOTBmNjQ1Njg5OGFhMDg4YzM2NDdmNmYyYjI5ZNO3kRs=: ]] 00:29:53.648 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:Y2E3Y2FiOGNjYzUyNGMxNjM4NWMzYTc5YWNmYTQ3NTA2MjFmOTBmNjQ1Njg5OGFhMDg4YzM2NDdmNmYyYjI5ZNO3kRs=: 00:29:53.648 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 0 00:29:53.648 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:53.648 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:29:53.648 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:29:53.648 
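The host/auth.sh@42-51 echoes in each iteration ('hmac(sha512)', the DH group, and the two DHHC-1 secrets) are nvmet_auth_set_key re-keying the target side before the initiator dials back in. xtrace does not print redirection targets, so the configfs paths, $hostnqn, and the attribute names in the sketch below are assumptions about how the Linux nvmet host entry is populated; only the echoed values are proven by this log:

    # Hedged sketch of nvmet_auth_set_key based on the traced echoes. Paths and
    # attribute names under /sys/kernel/config/nvmet are assumptions; keys[] and
    # ckeys[] are the test's key arrays referenced by the keyid loop above.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[$keyid]} ckey=${ckeys[$keyid]}
        local host_dir=/sys/kernel/config/nvmet/hosts/$hostnqn   # assumed path

        echo "hmac($digest)" > "$host_dir/dhchap_hash"      # trace: echo 'hmac(sha512)'
        echo "$dhgroup"      > "$host_dir/dhchap_dhgroup"   # trace: echo ffdhe2048
        echo "$key"          > "$host_dir/dhchap_key"       # trace: echo DHHC-1:00:...
        if [[ -n $ckey ]]; then                             # trace guards on the ckey being set
            echo "$ckey" > "$host_dir/dhchap_ctrl_key"      # controller key for bidirectional auth
        fi
    }
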
03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:29:53.648 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:53.648 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:53.648 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.648 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:53.648 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.648 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:53.648 03:10:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:53.648 03:10:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:53.648 03:10:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:53.648 03:10:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:53.648 03:10:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:53.648 03:10:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:53.648 03:10:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:53.648 03:10:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:53.648 03:10:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:53.648 03:10:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:53.648 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:53.648 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.648 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:53.648 nvme0n1 00:29:53.648 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.648 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:53.648 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.648 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:53.648 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:53.648 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.906 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:53.906 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:53.906 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.906 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:53.906 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.906 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:53.906 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:29:53.906 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:53.906 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:29:53.906 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:53.906 03:10:44 nvmf_tcp.nvmf_auth -- 
host/auth.sh@44 -- # keyid=1 00:29:53.906 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZTFiNGE3MWM4NGRjMjA5YzU2MzAyY2YwMGQxNThjMzk1OWE3ZjZkMDE2ODIzNGJjhAR5gA==: 00:29:53.906 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: 00:29:53.906 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:53.906 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:29:53.906 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZTFiNGE3MWM4NGRjMjA5YzU2MzAyY2YwMGQxNThjMzk1OWE3ZjZkMDE2ODIzNGJjhAR5gA==: 00:29:53.906 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: ]] 00:29:53.906 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: 00:29:53.906 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 1 00:29:53.906 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:53.906 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:29:53.906 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:29:53.906 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:29:53.906 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:53.906 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:53.906 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.906 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:53.906 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.906 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:53.906 03:10:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:53.906 03:10:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:53.906 03:10:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:53.906 03:10:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:53.906 03:10:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:53.906 03:10:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:53.906 03:10:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:53.906 03:10:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:53.906 03:10:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:53.906 03:10:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:53.906 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:53.906 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.906 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:53.906 nvme0n1 00:29:53.906 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.906 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd 
bdev_nvme_get_controllers 00:29:53.906 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:53.906 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.907 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:53.907 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.907 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:53.907 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:53.907 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.907 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:53.907 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.907 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:53.907 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:29:53.907 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:53.907 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:29:53.907 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:53.907 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:29:53.907 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:Y2E5NTJjNTFhNWZkOTI3NzQ3OTQ2OTE0MDk5M2U1ZjA/o/es: 00:29:53.907 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2M3YmQ4YjYzZjk3ZTJkNmFmMDVmOGQwOWJmN2EyNjQEPAL6: 00:29:53.907 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:53.907 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:29:53.907 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:Y2E5NTJjNTFhNWZkOTI3NzQ3OTQ2OTE0MDk5M2U1ZjA/o/es: 00:29:53.907 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2M3YmQ4YjYzZjk3ZTJkNmFmMDVmOGQwOWJmN2EyNjQEPAL6: ]] 00:29:53.907 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:Y2M3YmQ4YjYzZjk3ZTJkNmFmMDVmOGQwOWJmN2EyNjQEPAL6: 00:29:53.907 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 2 00:29:53.907 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:53.907 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:29:53.907 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:29:53.907 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:29:53.907 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:53.907 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:53.907 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.907 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:53.907 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.907 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:53.907 03:10:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:53.907 03:10:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:53.907 03:10:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:53.907 03:10:44 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:53.907 03:10:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:53.907 03:10:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:53.907 03:10:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:53.907 03:10:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:53.907 03:10:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:53.907 03:10:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:53.907 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:53.907 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.907 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:54.167 nvme0n1 00:29:54.167 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.167 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:54.167 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.167 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:54.167 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:54.167 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.167 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:54.167 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:54.167 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.167 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:54.167 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.167 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:54.167 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:29:54.167 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:54.167 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:29:54.167 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:54.167 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:29:54.167 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:YWE5NDlhNDQ1YzA3NTgyZTEwYjg4MDFhMGFmMWMxZWQ5ZWRmNjBjMGQ1ZjA5MzYxn4RUqQ==: 00:29:54.167 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjM1MGMzNjhjYWUyYmVlNzgxZTQ4ODIxNDNmNWRjNzY7eDnC: 00:29:54.167 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:54.167 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:29:54.167 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:YWE5NDlhNDQ1YzA3NTgyZTEwYjg4MDFhMGFmMWMxZWQ5ZWRmNjBjMGQ1ZjA5MzYxn4RUqQ==: 00:29:54.167 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjM1MGMzNjhjYWUyYmVlNzgxZTQ4ODIxNDNmNWRjNzY7eDnC: ]] 00:29:54.167 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:MjM1MGMzNjhjYWUyYmVlNzgxZTQ4ODIxNDNmNWRjNzY7eDnC: 00:29:54.167 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate 
sha512 ffdhe2048 3 00:29:54.167 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:54.167 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:29:54.167 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:29:54.167 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:29:54.167 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:54.167 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:54.167 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.167 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:54.167 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.167 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:54.167 03:10:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:54.167 03:10:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:54.167 03:10:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:54.167 03:10:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:54.167 03:10:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:54.167 03:10:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:54.167 03:10:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:54.167 03:10:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:54.167 03:10:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:54.167 03:10:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:54.167 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:54.167 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.167 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:54.426 nvme0n1 00:29:54.426 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.426 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:54.426 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.426 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:54.426 03:10:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:54.426 03:10:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.426 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:54.426 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:54.426 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.426 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:54.426 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.426 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:54.426 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:29:54.426 03:10:45 
nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:Y2EwYTU3OWQ4YjBkNDI1NGRiODgxOWFkZDNkYTZjZDUwNWFjYTEwNzQwNmQ2NDQxOWI4ZGMzNjk0YWIzYTNiOESKwX4=: 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:Y2EwYTU3OWQ4YjBkNDI1NGRiODgxOWFkZDNkYTZjZDUwNWFjYTEwNzQwNmQ2NDQxOWI4ZGMzNjk0YWIzYTNiOESKwX4=: 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 4 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:54.427 nvme0n1 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 
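The cycle traced above repeats for every dhgroup/keyid combination in this sha512 pass. The following is a condensed sketch reconstructed from the xtrace output, not taken verbatim from auth.sh: the target-side key setup inside nvmet_auth_set_key is elided, and the dhgroups/keys/ckeys arrays are assumed to hold the values visible in the trace.

    # Reconstructed sketch of the per-dhgroup / per-keyid test cycle (assumptions noted above).
    for dhgroup in "${dhgroups[@]}"; do            # ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144, ...
      for keyid in "${!keys[@]}"; do               # keyids 0..4, secrets in DHHC-1:xx:<base64>: form
        nvmet_auth_set_key sha512 "$dhgroup" "$keyid"   # target-side key setup (details elided)

        # Host side: restrict negotiation to the digest and DH group under test.
        rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"

        # The controller key is passed only when a ckey exists for this keyid.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" "${ckey[@]}"

        # Authentication succeeded if the controller shows up; detach before the next combination.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
      done
    done

The trace below continues with the same cycle for the remaining key ids and DH groups.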
00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZDVlMGQxYWQ3MWY0YTdhODkyZjZiMTNjZDc2YTIxZTRZFjZ/: 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2E3Y2FiOGNjYzUyNGMxNjM4NWMzYTc5YWNmYTQ3NTA2MjFmOTBmNjQ1Njg5OGFhMDg4YzM2NDdmNmYyYjI5ZNO3kRs=: 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZDVlMGQxYWQ3MWY0YTdhODkyZjZiMTNjZDc2YTIxZTRZFjZ/: 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2E3Y2FiOGNjYzUyNGMxNjM4NWMzYTc5YWNmYTQ3NTA2MjFmOTBmNjQ1Njg5OGFhMDg4YzM2NDdmNmYyYjI5ZNO3kRs=: ]] 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:Y2E3Y2FiOGNjYzUyNGMxNjM4NWMzYTc5YWNmYTQ3NTA2MjFmOTBmNjQ1Njg5OGFhMDg4YzM2NDdmNmYyYjI5ZNO3kRs=: 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 0 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.427 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:54.687 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.687 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:54.687 03:10:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:54.687 
03:10:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:54.687 03:10:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:54.687 03:10:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:54.687 03:10:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:54.687 03:10:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:54.687 03:10:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:54.687 03:10:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:54.687 03:10:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:54.687 03:10:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:54.687 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:54.687 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.687 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:54.687 nvme0n1 00:29:54.687 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.687 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:54.687 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.687 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:54.687 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:54.687 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.687 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:54.687 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:54.687 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.687 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:54.687 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.687 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:54.687 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:29:54.688 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:54.688 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:29:54.688 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:54.688 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:29:54.688 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZTFiNGE3MWM4NGRjMjA5YzU2MzAyY2YwMGQxNThjMzk1OWE3ZjZkMDE2ODIzNGJjhAR5gA==: 00:29:54.688 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: 00:29:54.688 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:54.688 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:29:54.688 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZTFiNGE3MWM4NGRjMjA5YzU2MzAyY2YwMGQxNThjMzk1OWE3ZjZkMDE2ODIzNGJjhAR5gA==: 00:29:54.688 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: ]] 00:29:54.688 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: 00:29:54.688 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 1 00:29:54.688 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:54.688 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:29:54.688 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:29:54.688 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:29:54.688 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:54.688 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:54.688 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.688 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:54.688 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.688 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:54.688 03:10:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:54.688 03:10:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:54.688 03:10:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:54.688 03:10:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:54.688 03:10:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:54.688 03:10:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:54.688 03:10:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:54.688 03:10:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:54.688 03:10:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:54.688 03:10:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:54.688 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:54.688 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.688 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:54.948 nvme0n1 00:29:54.948 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.948 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:54.948 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.948 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:54.948 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:54.949 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.949 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:54.949 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:54.949 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.949 03:10:45 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:29:54.949 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.949 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:54.949 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:29:54.949 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:54.949 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:29:54.949 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:54.949 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:29:54.949 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:Y2E5NTJjNTFhNWZkOTI3NzQ3OTQ2OTE0MDk5M2U1ZjA/o/es: 00:29:54.949 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2M3YmQ4YjYzZjk3ZTJkNmFmMDVmOGQwOWJmN2EyNjQEPAL6: 00:29:54.949 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:54.949 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:29:54.949 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:Y2E5NTJjNTFhNWZkOTI3NzQ3OTQ2OTE0MDk5M2U1ZjA/o/es: 00:29:54.949 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2M3YmQ4YjYzZjk3ZTJkNmFmMDVmOGQwOWJmN2EyNjQEPAL6: ]] 00:29:54.949 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:Y2M3YmQ4YjYzZjk3ZTJkNmFmMDVmOGQwOWJmN2EyNjQEPAL6: 00:29:54.949 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 2 00:29:54.949 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:54.949 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:29:54.949 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:29:54.949 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:29:54.949 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:54.949 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:54.949 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.949 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:54.949 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.949 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:54.949 03:10:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:54.949 03:10:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:54.949 03:10:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:54.949 03:10:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:54.949 03:10:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:54.949 03:10:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:54.949 03:10:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:54.949 03:10:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:54.949 03:10:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:54.949 03:10:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:54.949 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:54.949 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.949 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:55.210 nvme0n1 00:29:55.210 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.210 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:55.210 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.210 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:55.210 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:55.210 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.210 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:55.210 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:55.210 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.210 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:55.210 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.210 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:55.210 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:29:55.210 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:55.210 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:29:55.210 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:55.210 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:29:55.210 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:YWE5NDlhNDQ1YzA3NTgyZTEwYjg4MDFhMGFmMWMxZWQ5ZWRmNjBjMGQ1ZjA5MzYxn4RUqQ==: 00:29:55.210 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjM1MGMzNjhjYWUyYmVlNzgxZTQ4ODIxNDNmNWRjNzY7eDnC: 00:29:55.210 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:55.210 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:29:55.210 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:YWE5NDlhNDQ1YzA3NTgyZTEwYjg4MDFhMGFmMWMxZWQ5ZWRmNjBjMGQ1ZjA5MzYxn4RUqQ==: 00:29:55.210 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjM1MGMzNjhjYWUyYmVlNzgxZTQ4ODIxNDNmNWRjNzY7eDnC: ]] 00:29:55.210 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:MjM1MGMzNjhjYWUyYmVlNzgxZTQ4ODIxNDNmNWRjNzY7eDnC: 00:29:55.210 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 3 00:29:55.210 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:55.210 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:29:55.210 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:29:55.210 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:29:55.210 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:55.210 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:55.210 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:29:55.210 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:55.210 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.210 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:55.210 03:10:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:55.210 03:10:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:55.210 03:10:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:55.210 03:10:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:55.210 03:10:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:55.210 03:10:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:55.210 03:10:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:55.210 03:10:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:55.210 03:10:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:55.210 03:10:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:55.210 03:10:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:55.210 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.210 03:10:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:55.469 nvme0n1 00:29:55.469 03:10:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.469 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:55.469 03:10:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.469 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:55.469 03:10:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:55.469 03:10:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.469 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:55.469 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:55.469 03:10:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.469 03:10:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:55.469 03:10:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.469 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:55.469 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:29:55.469 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:55.469 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:29:55.469 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:55.469 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:29:55.469 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:Y2EwYTU3OWQ4YjBkNDI1NGRiODgxOWFkZDNkYTZjZDUwNWFjYTEwNzQwNmQ2NDQxOWI4ZGMzNjk0YWIzYTNiOESKwX4=: 00:29:55.469 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:29:55.469 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:55.469 03:10:46 nvmf_tcp.nvmf_auth -- 
host/auth.sh@49 -- # echo ffdhe3072 00:29:55.469 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:Y2EwYTU3OWQ4YjBkNDI1NGRiODgxOWFkZDNkYTZjZDUwNWFjYTEwNzQwNmQ2NDQxOWI4ZGMzNjk0YWIzYTNiOESKwX4=: 00:29:55.469 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:55.469 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 4 00:29:55.469 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:55.469 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:29:55.469 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:29:55.469 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:29:55.469 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:55.469 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:55.469 03:10:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.469 03:10:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:55.469 03:10:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.469 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:55.469 03:10:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:55.469 03:10:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:55.469 03:10:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:55.469 03:10:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:55.469 03:10:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:55.469 03:10:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:55.469 03:10:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:55.469 03:10:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:55.469 03:10:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:55.469 03:10:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:55.469 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:55.469 03:10:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.470 03:10:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:55.729 nvme0n1 00:29:55.729 03:10:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.729 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:55.729 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:55.729 03:10:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.729 03:10:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:55.729 03:10:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.729 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:55.729 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:55.729 03:10:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.729 03:10:46 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:29:55.729 03:10:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.729 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:29:55.729 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:55.729 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:29:55.729 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:55.729 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:29:55.729 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:55.729 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:29:55.729 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZDVlMGQxYWQ3MWY0YTdhODkyZjZiMTNjZDc2YTIxZTRZFjZ/: 00:29:55.729 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2E3Y2FiOGNjYzUyNGMxNjM4NWMzYTc5YWNmYTQ3NTA2MjFmOTBmNjQ1Njg5OGFhMDg4YzM2NDdmNmYyYjI5ZNO3kRs=: 00:29:55.729 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:55.729 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:29:55.729 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZDVlMGQxYWQ3MWY0YTdhODkyZjZiMTNjZDc2YTIxZTRZFjZ/: 00:29:55.729 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2E3Y2FiOGNjYzUyNGMxNjM4NWMzYTc5YWNmYTQ3NTA2MjFmOTBmNjQ1Njg5OGFhMDg4YzM2NDdmNmYyYjI5ZNO3kRs=: ]] 00:29:55.729 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:Y2E3Y2FiOGNjYzUyNGMxNjM4NWMzYTc5YWNmYTQ3NTA2MjFmOTBmNjQ1Njg5OGFhMDg4YzM2NDdmNmYyYjI5ZNO3kRs=: 00:29:55.729 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 0 00:29:55.729 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:55.729 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:29:55.729 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:29:55.729 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:29:55.729 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:55.729 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:55.729 03:10:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.729 03:10:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:55.729 03:10:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.729 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:55.729 03:10:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:55.729 03:10:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:55.729 03:10:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:55.729 03:10:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:55.729 03:10:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:55.729 03:10:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:55.729 03:10:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:55.729 03:10:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:55.729 03:10:46 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:55.729 03:10:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:55.729 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:55.729 03:10:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.729 03:10:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:55.991 nvme0n1 00:29:55.991 03:10:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.991 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:55.991 03:10:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.991 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:55.991 03:10:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:55.991 03:10:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.991 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:55.991 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:55.991 03:10:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.991 03:10:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:55.991 03:10:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.991 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:55.991 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:29:55.991 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:55.991 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:29:55.991 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:55.991 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:29:55.991 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZTFiNGE3MWM4NGRjMjA5YzU2MzAyY2YwMGQxNThjMzk1OWE3ZjZkMDE2ODIzNGJjhAR5gA==: 00:29:55.991 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: 00:29:55.991 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:55.991 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:29:55.991 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZTFiNGE3MWM4NGRjMjA5YzU2MzAyY2YwMGQxNThjMzk1OWE3ZjZkMDE2ODIzNGJjhAR5gA==: 00:29:55.991 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: ]] 00:29:55.991 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: 00:29:55.991 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 1 00:29:55.991 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:55.991 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:29:55.991 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:29:55.991 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:29:55.991 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@71 
-- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:55.991 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:55.991 03:10:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.991 03:10:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:55.991 03:10:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.991 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:55.991 03:10:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:55.991 03:10:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:55.991 03:10:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:55.991 03:10:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:55.991 03:10:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:55.991 03:10:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:55.991 03:10:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:55.991 03:10:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:55.991 03:10:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:55.991 03:10:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:55.991 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:55.991 03:10:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.991 03:10:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:56.251 nvme0n1 00:29:56.251 03:10:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:56.251 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:56.251 03:10:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:56.251 03:10:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:56.251 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:56.251 03:10:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:56.251 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:56.251 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:56.251 03:10:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:56.251 03:10:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:56.251 03:10:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:56.251 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:56.251 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:29:56.251 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:56.252 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:29:56.252 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:56.252 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:29:56.252 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # 
key=DHHC-1:01:Y2E5NTJjNTFhNWZkOTI3NzQ3OTQ2OTE0MDk5M2U1ZjA/o/es: 00:29:56.252 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2M3YmQ4YjYzZjk3ZTJkNmFmMDVmOGQwOWJmN2EyNjQEPAL6: 00:29:56.252 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:56.252 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:29:56.252 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:Y2E5NTJjNTFhNWZkOTI3NzQ3OTQ2OTE0MDk5M2U1ZjA/o/es: 00:29:56.252 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2M3YmQ4YjYzZjk3ZTJkNmFmMDVmOGQwOWJmN2EyNjQEPAL6: ]] 00:29:56.252 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:Y2M3YmQ4YjYzZjk3ZTJkNmFmMDVmOGQwOWJmN2EyNjQEPAL6: 00:29:56.252 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 2 00:29:56.252 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:56.252 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:29:56.252 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:29:56.252 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:29:56.252 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:56.252 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:56.252 03:10:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:56.252 03:10:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:56.252 03:10:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:56.252 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:56.252 03:10:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:56.252 03:10:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:56.252 03:10:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:56.252 03:10:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:56.252 03:10:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:56.252 03:10:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:56.252 03:10:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:56.252 03:10:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:56.252 03:10:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:56.252 03:10:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:56.252 03:10:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:56.252 03:10:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:56.252 03:10:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:56.513 nvme0n1 00:29:56.513 03:10:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:56.513 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:56.513 03:10:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:56.513 03:10:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:56.513 03:10:47 
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:56.513 03:10:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:56.513 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:56.513 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:56.513 03:10:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:56.513 03:10:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:56.513 03:10:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:56.513 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:56.513 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:29:56.513 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:56.513 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:29:56.513 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:56.513 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:29:56.513 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:YWE5NDlhNDQ1YzA3NTgyZTEwYjg4MDFhMGFmMWMxZWQ5ZWRmNjBjMGQ1ZjA5MzYxn4RUqQ==: 00:29:56.513 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjM1MGMzNjhjYWUyYmVlNzgxZTQ4ODIxNDNmNWRjNzY7eDnC: 00:29:56.513 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:56.513 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:29:56.513 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:YWE5NDlhNDQ1YzA3NTgyZTEwYjg4MDFhMGFmMWMxZWQ5ZWRmNjBjMGQ1ZjA5MzYxn4RUqQ==: 00:29:56.513 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjM1MGMzNjhjYWUyYmVlNzgxZTQ4ODIxNDNmNWRjNzY7eDnC: ]] 00:29:56.513 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:MjM1MGMzNjhjYWUyYmVlNzgxZTQ4ODIxNDNmNWRjNzY7eDnC: 00:29:56.513 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 3 00:29:56.513 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:56.513 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:29:56.513 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:29:56.513 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:29:56.513 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:56.513 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:56.513 03:10:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:56.513 03:10:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:56.513 03:10:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:56.513 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:56.513 03:10:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:56.513 03:10:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:56.513 03:10:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:56.513 03:10:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:56.513 03:10:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
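Note on the ckey=(...) expansion recorded at host/auth.sh@71, and the empty ckey seen for keyid 4 throughout this trace: the ${ckeys[keyid]:+...} form emits the --dhchap-ctrlr-key argument pair only when a controller key is defined, so keyid 4 is attached with host-only (one-way) authentication. A minimal standalone illustration with hypothetical values:

    # Hypothetical values, only to show the ${var:+word} idiom used by the test.
    ckeys=( "" "ctrl-key-one" "" "" "" )     # indexed by keyid; keyid 4 has no controller key
    for keyid in 1 4; do
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid -> ${ckey[*]:-no controller key, one-way auth}"
    done
    # prints:
    # keyid=1 -> --dhchap-ctrlr-key ckey1
    # keyid=4 -> no controller key, one-way auth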
00:29:56.513 03:10:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:56.513 03:10:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:56.513 03:10:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:56.513 03:10:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:56.513 03:10:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:56.513 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:56.513 03:10:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:56.513 03:10:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:56.772 nvme0n1 00:29:56.772 03:10:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:56.772 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:56.772 03:10:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:56.772 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:56.772 03:10:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:56.772 03:10:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:57.031 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:57.031 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:57.031 03:10:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:57.031 03:10:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:57.031 03:10:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:57.031 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:57.031 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:29:57.031 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:57.031 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:29:57.031 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:57.031 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:29:57.031 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:Y2EwYTU3OWQ4YjBkNDI1NGRiODgxOWFkZDNkYTZjZDUwNWFjYTEwNzQwNmQ2NDQxOWI4ZGMzNjk0YWIzYTNiOESKwX4=: 00:29:57.031 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:29:57.031 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:57.031 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:29:57.031 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:Y2EwYTU3OWQ4YjBkNDI1NGRiODgxOWFkZDNkYTZjZDUwNWFjYTEwNzQwNmQ2NDQxOWI4ZGMzNjk0YWIzYTNiOESKwX4=: 00:29:57.031 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:57.031 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 4 00:29:57.031 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:57.031 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:29:57.031 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:29:57.031 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:29:57.031 03:10:47 
nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:57.031 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:57.031 03:10:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:57.032 03:10:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:57.032 03:10:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:57.032 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:57.032 03:10:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:57.032 03:10:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:57.032 03:10:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:57.032 03:10:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:57.032 03:10:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:57.032 03:10:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:57.032 03:10:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:57.032 03:10:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:57.032 03:10:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:57.032 03:10:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:57.032 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:57.032 03:10:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:57.032 03:10:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:57.291 nvme0n1 00:29:57.291 03:10:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:57.291 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:57.291 03:10:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:57.291 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:57.291 03:10:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:57.291 03:10:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:57.291 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:57.291 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:57.291 03:10:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:57.291 03:10:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:57.291 03:10:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:57.291 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:29:57.291 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:57.291 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:29:57.291 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:57.291 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:29:57.291 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:57.291 03:10:47 nvmf_tcp.nvmf_auth -- 
host/auth.sh@44 -- # keyid=0 00:29:57.291 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZDVlMGQxYWQ3MWY0YTdhODkyZjZiMTNjZDc2YTIxZTRZFjZ/: 00:29:57.291 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2E3Y2FiOGNjYzUyNGMxNjM4NWMzYTc5YWNmYTQ3NTA2MjFmOTBmNjQ1Njg5OGFhMDg4YzM2NDdmNmYyYjI5ZNO3kRs=: 00:29:57.291 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:57.291 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:29:57.291 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZDVlMGQxYWQ3MWY0YTdhODkyZjZiMTNjZDc2YTIxZTRZFjZ/: 00:29:57.291 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2E3Y2FiOGNjYzUyNGMxNjM4NWMzYTc5YWNmYTQ3NTA2MjFmOTBmNjQ1Njg5OGFhMDg4YzM2NDdmNmYyYjI5ZNO3kRs=: ]] 00:29:57.291 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:Y2E3Y2FiOGNjYzUyNGMxNjM4NWMzYTc5YWNmYTQ3NTA2MjFmOTBmNjQ1Njg5OGFhMDg4YzM2NDdmNmYyYjI5ZNO3kRs=: 00:29:57.291 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 0 00:29:57.291 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:57.291 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:29:57.291 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:29:57.291 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:29:57.291 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:57.291 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:57.291 03:10:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:57.291 03:10:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:57.291 03:10:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:57.291 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:57.291 03:10:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:57.291 03:10:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:57.291 03:10:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:57.291 03:10:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:57.291 03:10:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:57.292 03:10:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:57.292 03:10:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:57.292 03:10:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:57.292 03:10:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:57.292 03:10:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:57.292 03:10:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:57.292 03:10:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:57.292 03:10:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:57.858 nvme0n1 00:29:57.858 03:10:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:57.858 03:10:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd 
bdev_nvme_get_controllers 00:29:57.858 03:10:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:57.858 03:10:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:57.858 03:10:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:57.858 03:10:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:57.858 03:10:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:57.858 03:10:48 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:57.858 03:10:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:57.858 03:10:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:57.858 03:10:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:57.858 03:10:48 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:57.858 03:10:48 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:29:57.858 03:10:48 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:57.858 03:10:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:29:57.858 03:10:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:57.858 03:10:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:29:57.858 03:10:48 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZTFiNGE3MWM4NGRjMjA5YzU2MzAyY2YwMGQxNThjMzk1OWE3ZjZkMDE2ODIzNGJjhAR5gA==: 00:29:57.858 03:10:48 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: 00:29:57.858 03:10:48 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:57.858 03:10:48 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:29:57.858 03:10:48 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZTFiNGE3MWM4NGRjMjA5YzU2MzAyY2YwMGQxNThjMzk1OWE3ZjZkMDE2ODIzNGJjhAR5gA==: 00:29:57.858 03:10:48 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: ]] 00:29:57.858 03:10:48 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: 00:29:57.858 03:10:48 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 1 00:29:57.858 03:10:48 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:57.858 03:10:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:29:57.858 03:10:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:29:57.858 03:10:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:29:57.858 03:10:48 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:57.858 03:10:48 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:57.858 03:10:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:57.858 03:10:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:57.858 03:10:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:57.858 03:10:48 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:57.858 03:10:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:57.858 03:10:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:57.858 
03:10:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:57.858 03:10:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:57.858 03:10:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:57.858 03:10:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:57.858 03:10:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:57.858 03:10:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:57.858 03:10:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:57.858 03:10:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:57.858 03:10:48 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:57.858 03:10:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:57.858 03:10:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:58.429 nvme0n1 00:29:58.429 03:10:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.429 03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:58.429 03:10:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.429 03:10:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:58.429 03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:58.429 03:10:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.429 03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:58.429 03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:58.429 03:10:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.429 03:10:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:58.429 03:10:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.429 03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:58.429 03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:29:58.429 03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:58.429 03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:29:58.429 03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:58.429 03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:29:58.429 03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:Y2E5NTJjNTFhNWZkOTI3NzQ3OTQ2OTE0MDk5M2U1ZjA/o/es: 00:29:58.429 03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2M3YmQ4YjYzZjk3ZTJkNmFmMDVmOGQwOWJmN2EyNjQEPAL6: 00:29:58.429 03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:58.429 03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:29:58.429 03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:Y2E5NTJjNTFhNWZkOTI3NzQ3OTQ2OTE0MDk5M2U1ZjA/o/es: 00:29:58.429 03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2M3YmQ4YjYzZjk3ZTJkNmFmMDVmOGQwOWJmN2EyNjQEPAL6: ]] 00:29:58.429 03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:Y2M3YmQ4YjYzZjk3ZTJkNmFmMDVmOGQwOWJmN2EyNjQEPAL6: 00:29:58.429 
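All of the secrets echoed above share the textual DH-HMAC-CHAP layout DHHC-1:<t>:<base64 blob>:, and splitting one of them makes the fields visible. The reading of the <t> field (00 = no hash transformation, 01/02/03 = SHA-256/384/512) follows the NVMe DH-HMAC-CHAP secret convention and is stated here as background, not something printed by the test:

  key='DHHC-1:01:Y2E5NTJjNTFhNWZkOTI3NzQ3OTQ2OTE0MDk5M2U1ZjA/o/es:'   # keyid 2 host key from the trace above
  IFS=':' read -r prefix transform blob _ <<< "$key"
  echo "$prefix"      # DHHC-1  -- format identifier
  echo "$transform"   # 01      -- hash transformation applied to the secret
  echo "${#blob}"     # length of the base64-encoded secret material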
03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 2 00:29:58.429 03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:58.429 03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:29:58.429 03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:29:58.429 03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:29:58.429 03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:58.429 03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:58.429 03:10:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.429 03:10:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:58.429 03:10:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.429 03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:58.429 03:10:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:58.429 03:10:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:58.429 03:10:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:58.429 03:10:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:58.429 03:10:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:58.429 03:10:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:58.429 03:10:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:58.429 03:10:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:58.429 03:10:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:58.429 03:10:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:58.429 03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:58.429 03:10:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.429 03:10:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:58.999 nvme0n1 00:29:58.999 03:10:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.999 03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:58.999 03:10:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.999 03:10:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:58.999 03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:58.999 03:10:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.999 03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:58.999 03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:58.999 03:10:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.999 03:10:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:58.999 03:10:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.999 03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:58.999 03:10:49 nvmf_tcp.nvmf_auth -- 
host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:29:58.999 03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:58.999 03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:29:58.999 03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:58.999 03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:29:58.999 03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:YWE5NDlhNDQ1YzA3NTgyZTEwYjg4MDFhMGFmMWMxZWQ5ZWRmNjBjMGQ1ZjA5MzYxn4RUqQ==: 00:29:58.999 03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjM1MGMzNjhjYWUyYmVlNzgxZTQ4ODIxNDNmNWRjNzY7eDnC: 00:29:58.999 03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:58.999 03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:29:58.999 03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:YWE5NDlhNDQ1YzA3NTgyZTEwYjg4MDFhMGFmMWMxZWQ5ZWRmNjBjMGQ1ZjA5MzYxn4RUqQ==: 00:29:58.999 03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjM1MGMzNjhjYWUyYmVlNzgxZTQ4ODIxNDNmNWRjNzY7eDnC: ]] 00:29:58.999 03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:MjM1MGMzNjhjYWUyYmVlNzgxZTQ4ODIxNDNmNWRjNzY7eDnC: 00:29:58.999 03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 3 00:29:58.999 03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:58.999 03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:29:58.999 03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:29:58.999 03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:29:58.999 03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:58.999 03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:58.999 03:10:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.999 03:10:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:58.999 03:10:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.999 03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:58.999 03:10:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:58.999 03:10:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:29:58.999 03:10:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:58.999 03:10:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:58.999 03:10:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:58.999 03:10:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:58.999 03:10:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:58.999 03:10:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:58.999 03:10:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:58.999 03:10:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:58.999 03:10:49 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:58.999 03:10:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:29:58.999 03:10:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:59.570 nvme0n1 00:29:59.570 03:10:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:59.570 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:29:59.570 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:29:59.570 03:10:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:59.570 03:10:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:59.570 03:10:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:59.570 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:59.570 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:59.570 03:10:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:59.570 03:10:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:59.570 03:10:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:59.570 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:29:59.570 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:29:59.570 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:59.570 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:29:59.570 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:59.570 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:29:59.570 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:Y2EwYTU3OWQ4YjBkNDI1NGRiODgxOWFkZDNkYTZjZDUwNWFjYTEwNzQwNmQ2NDQxOWI4ZGMzNjk0YWIzYTNiOESKwX4=: 00:29:59.570 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:29:59.570 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:59.570 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:29:59.570 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:Y2EwYTU3OWQ4YjBkNDI1NGRiODgxOWFkZDNkYTZjZDUwNWFjYTEwNzQwNmQ2NDQxOWI4ZGMzNjk0YWIzYTNiOESKwX4=: 00:29:59.570 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:59.570 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 4 00:29:59.570 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:29:59.570 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:29:59.570 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:29:59.570 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:29:59.570 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:59.570 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:59.570 03:10:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:59.570 03:10:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:29:59.570 03:10:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:59.570 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:29:59.570 03:10:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:29:59.570 03:10:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 
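Stripped of the xtrace prefixes, each pass of the dhgroup/keyid loop above reduces to the following RPC sequence -- a minimal sketch using scripts/rpc.py (the command behind the test's rpc_cmd wrapper), with the address, NQNs and key names key0/ckey0 taken from the log; the keys themselves are generated and loaded into the kernel target earlier in host/auth.sh, outside this excerpt:

  # 1. Limit the initiator to the digest/DH group under test for this round.
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  # 2. Connect to the kernel target, supplying the host key and (when present) the controller key.
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # 3. Confirm the controller came up, then detach it before the next keyid is tried.
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0
  scripts/rpc.py bdev_nvme_detach_controller nvme0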
00:29:59.570 03:10:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:29:59.570 03:10:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:59.570 03:10:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:59.570 03:10:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:29:59.570 03:10:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:59.570 03:10:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:29:59.570 03:10:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:29:59.570 03:10:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:29:59.570 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:59.570 03:10:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:59.570 03:10:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:30:00.139 nvme0n1 00:30:00.139 03:10:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:00.139 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:30:00.139 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:30:00.139 03:10:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:00.139 03:10:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:30:00.139 03:10:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:00.139 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:00.139 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:00.139 03:10:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:00.139 03:10:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:30:00.139 03:10:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:00.139 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:30:00.139 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:30:00.139 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:30:00.140 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:00.140 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:30:00.140 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:00.140 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:30:00.140 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZDVlMGQxYWQ3MWY0YTdhODkyZjZiMTNjZDc2YTIxZTRZFjZ/: 00:30:00.140 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2E3Y2FiOGNjYzUyNGMxNjM4NWMzYTc5YWNmYTQ3NTA2MjFmOTBmNjQ1Njg5OGFhMDg4YzM2NDdmNmYyYjI5ZNO3kRs=: 00:30:00.140 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:00.140 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:30:00.140 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZDVlMGQxYWQ3MWY0YTdhODkyZjZiMTNjZDc2YTIxZTRZFjZ/: 00:30:00.140 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:Y2E3Y2FiOGNjYzUyNGMxNjM4NWMzYTc5YWNmYTQ3NTA2MjFmOTBmNjQ1Njg5OGFhMDg4YzM2NDdmNmYyYjI5ZNO3kRs=: ]] 00:30:00.140 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:Y2E3Y2FiOGNjYzUyNGMxNjM4NWMzYTc5YWNmYTQ3NTA2MjFmOTBmNjQ1Njg5OGFhMDg4YzM2NDdmNmYyYjI5ZNO3kRs=: 00:30:00.140 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 0 00:30:00.140 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:30:00.140 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:30:00.140 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:30:00.140 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:30:00.140 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:00.140 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:00.140 03:10:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:00.140 03:10:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:30:00.140 03:10:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:00.140 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:30:00.140 03:10:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:30:00.140 03:10:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:30:00.140 03:10:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:30:00.140 03:10:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:00.140 03:10:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:00.140 03:10:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:30:00.140 03:10:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:00.140 03:10:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:30:00.140 03:10:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:30:00.140 03:10:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:30:00.140 03:10:50 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:00.140 03:10:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:00.140 03:10:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:30:01.078 nvme0n1 00:30:01.078 03:10:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:01.078 03:10:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:30:01.078 03:10:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:01.078 03:10:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:30:01.078 03:10:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:30:01.078 03:10:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:01.078 03:10:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:01.078 03:10:51 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:01.078 03:10:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:01.078 03:10:51 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:30:01.078 03:10:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:01.078 03:10:51 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:30:01.078 03:10:51 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:30:01.078 03:10:51 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:01.078 03:10:51 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:30:01.078 03:10:51 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:01.078 03:10:51 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:30:01.078 03:10:51 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZTFiNGE3MWM4NGRjMjA5YzU2MzAyY2YwMGQxNThjMzk1OWE3ZjZkMDE2ODIzNGJjhAR5gA==: 00:30:01.078 03:10:51 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: 00:30:01.078 03:10:51 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:01.078 03:10:51 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:30:01.078 03:10:51 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZTFiNGE3MWM4NGRjMjA5YzU2MzAyY2YwMGQxNThjMzk1OWE3ZjZkMDE2ODIzNGJjhAR5gA==: 00:30:01.078 03:10:51 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: ]] 00:30:01.078 03:10:51 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: 00:30:01.078 03:10:51 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 1 00:30:01.078 03:10:51 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:30:01.078 03:10:51 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:30:01.078 03:10:51 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:30:01.078 03:10:51 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:30:01.078 03:10:51 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:01.078 03:10:51 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:01.078 03:10:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:01.078 03:10:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:30:01.078 03:10:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:01.078 03:10:51 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:30:01.078 03:10:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:30:01.078 03:10:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:30:01.078 03:10:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:30:01.078 03:10:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:01.078 03:10:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:01.078 03:10:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:30:01.078 03:10:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:01.078 03:10:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:30:01.078 03:10:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:30:01.078 03:10:51 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@742 -- # echo 10.0.0.1 00:30:01.078 03:10:51 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:01.078 03:10:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:01.079 03:10:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:30:02.014 nvme0n1 00:30:02.014 03:10:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:02.014 03:10:52 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:30:02.014 03:10:52 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:30:02.014 03:10:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:02.015 03:10:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:30:02.015 03:10:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:02.015 03:10:52 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:02.015 03:10:52 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:02.015 03:10:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:02.015 03:10:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:30:02.015 03:10:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:02.015 03:10:52 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:30:02.015 03:10:52 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:30:02.015 03:10:52 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:02.015 03:10:52 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:30:02.015 03:10:52 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:02.015 03:10:52 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:30:02.015 03:10:52 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:Y2E5NTJjNTFhNWZkOTI3NzQ3OTQ2OTE0MDk5M2U1ZjA/o/es: 00:30:02.015 03:10:52 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2M3YmQ4YjYzZjk3ZTJkNmFmMDVmOGQwOWJmN2EyNjQEPAL6: 00:30:02.015 03:10:52 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:02.015 03:10:52 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:30:02.015 03:10:52 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:Y2E5NTJjNTFhNWZkOTI3NzQ3OTQ2OTE0MDk5M2U1ZjA/o/es: 00:30:02.015 03:10:52 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2M3YmQ4YjYzZjk3ZTJkNmFmMDVmOGQwOWJmN2EyNjQEPAL6: ]] 00:30:02.015 03:10:52 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:Y2M3YmQ4YjYzZjk3ZTJkNmFmMDVmOGQwOWJmN2EyNjQEPAL6: 00:30:02.015 03:10:52 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 2 00:30:02.015 03:10:52 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:30:02.015 03:10:52 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:30:02.015 03:10:52 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:30:02.015 03:10:52 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:30:02.015 03:10:52 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:02.015 03:10:52 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups 
ffdhe8192 00:30:02.015 03:10:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:02.015 03:10:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:30:02.015 03:10:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:02.015 03:10:52 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:30:02.015 03:10:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:30:02.015 03:10:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:30:02.015 03:10:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:30:02.015 03:10:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:02.015 03:10:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:02.015 03:10:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:30:02.015 03:10:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:02.015 03:10:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:30:02.015 03:10:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:30:02.015 03:10:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:30:02.015 03:10:52 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:02.015 03:10:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:02.015 03:10:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:30:02.958 nvme0n1 00:30:02.958 03:10:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:02.958 03:10:53 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:30:02.958 03:10:53 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:30:02.958 03:10:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:02.958 03:10:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:30:02.958 03:10:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:02.958 03:10:53 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:02.958 03:10:53 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:02.958 03:10:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:02.958 03:10:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:30:02.958 03:10:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:02.958 03:10:53 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:30:02.958 03:10:53 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:30:02.958 03:10:53 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:02.958 03:10:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:30:02.958 03:10:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:02.958 03:10:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:30:02.958 03:10:53 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:YWE5NDlhNDQ1YzA3NTgyZTEwYjg4MDFhMGFmMWMxZWQ5ZWRmNjBjMGQ1ZjA5MzYxn4RUqQ==: 00:30:02.958 03:10:53 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjM1MGMzNjhjYWUyYmVlNzgxZTQ4ODIxNDNmNWRjNzY7eDnC: 00:30:02.958 
03:10:53 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:02.958 03:10:53 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:30:02.958 03:10:53 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:YWE5NDlhNDQ1YzA3NTgyZTEwYjg4MDFhMGFmMWMxZWQ5ZWRmNjBjMGQ1ZjA5MzYxn4RUqQ==: 00:30:02.958 03:10:53 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjM1MGMzNjhjYWUyYmVlNzgxZTQ4ODIxNDNmNWRjNzY7eDnC: ]] 00:30:02.958 03:10:53 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:MjM1MGMzNjhjYWUyYmVlNzgxZTQ4ODIxNDNmNWRjNzY7eDnC: 00:30:02.958 03:10:53 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 3 00:30:02.958 03:10:53 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:30:02.958 03:10:53 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:30:02.958 03:10:53 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:30:02.958 03:10:53 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:30:02.958 03:10:53 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:02.958 03:10:53 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:02.958 03:10:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:02.958 03:10:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:30:02.958 03:10:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:02.958 03:10:53 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:30:02.958 03:10:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:30:02.958 03:10:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:30:02.958 03:10:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:30:02.958 03:10:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:02.958 03:10:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:02.958 03:10:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:30:02.958 03:10:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:02.958 03:10:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:30:02.958 03:10:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:30:02.958 03:10:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:30:02.958 03:10:53 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:02.958 03:10:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:02.958 03:10:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:30:03.938 nvme0n1 00:30:03.938 03:10:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:03.938 03:10:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:30:03.938 03:10:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:03.938 03:10:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:30:03.938 03:10:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:30:03.938 03:10:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:03.938 03:10:54 
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:03.938 03:10:54 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:03.938 03:10:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:03.939 03:10:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:30:03.939 03:10:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:03.939 03:10:54 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:30:03.939 03:10:54 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:30:03.939 03:10:54 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:03.939 03:10:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:30:03.939 03:10:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:03.939 03:10:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:30:03.939 03:10:54 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:Y2EwYTU3OWQ4YjBkNDI1NGRiODgxOWFkZDNkYTZjZDUwNWFjYTEwNzQwNmQ2NDQxOWI4ZGMzNjk0YWIzYTNiOESKwX4=: 00:30:03.939 03:10:54 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:30:03.939 03:10:54 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:03.939 03:10:54 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:30:03.939 03:10:54 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:Y2EwYTU3OWQ4YjBkNDI1NGRiODgxOWFkZDNkYTZjZDUwNWFjYTEwNzQwNmQ2NDQxOWI4ZGMzNjk0YWIzYTNiOESKwX4=: 00:30:03.939 03:10:54 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:03.939 03:10:54 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 4 00:30:03.939 03:10:54 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:30:03.939 03:10:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:30:03.939 03:10:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:30:03.939 03:10:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:30:03.939 03:10:54 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:03.939 03:10:54 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:03.939 03:10:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:03.939 03:10:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:30:03.939 03:10:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:03.939 03:10:54 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:30:03.939 03:10:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:30:03.939 03:10:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:30:03.939 03:10:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:30:03.939 03:10:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:03.939 03:10:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:03.939 03:10:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:30:03.939 03:10:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:03.939 03:10:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:30:03.939 03:10:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:30:03.939 03:10:54 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:30:03.939 03:10:54 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:03.939 03:10:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:03.939 03:10:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:30:04.876 nvme0n1 00:30:04.876 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.876 03:10:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:30:04.876 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.876 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:30:04.876 03:10:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:30:04.876 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.876 03:10:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:04.876 03:10:55 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:04.876 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.876 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:30:04.876 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.876 03:10:55 nvmf_tcp.nvmf_auth -- host/auth.sh@123 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:30:04.876 03:10:55 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:04.876 03:10:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:30:04.876 03:10:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:04.876 03:10:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:30:04.876 03:10:55 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZTFiNGE3MWM4NGRjMjA5YzU2MzAyY2YwMGQxNThjMzk1OWE3ZjZkMDE2ODIzNGJjhAR5gA==: 00:30:04.876 03:10:55 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: 00:30:04.876 03:10:55 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:04.876 03:10:55 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:30:04.876 03:10:55 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZTFiNGE3MWM4NGRjMjA5YzU2MzAyY2YwMGQxNThjMzk1OWE3ZjZkMDE2ODIzNGJjhAR5gA==: 00:30:04.876 03:10:55 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: ]] 00:30:04.876 03:10:55 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NzNmOTEyNTVlNWIxOTAwZjgyZWZjYTU1NjY1Zjg2MDY5MjhiOTkwZjc0NDczMjE3s5oSoQ==: 00:30:04.876 03:10:55 nvmf_tcp.nvmf_auth -- host/auth.sh@124 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:04.876 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.876 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:30:04.876 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- host/auth.sh@125 -- # get_main_ns_ip 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:30:05.135 
03:10:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- host/auth.sh@125 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@648 -- # local es=0 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:30:05.135 request: 00:30:05.135 { 00:30:05.135 "name": "nvme0", 00:30:05.135 "trtype": "tcp", 00:30:05.135 "traddr": "10.0.0.1", 00:30:05.135 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:30:05.135 "adrfam": "ipv4", 00:30:05.135 "trsvcid": "4420", 00:30:05.135 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:30:05.135 "method": "bdev_nvme_attach_controller", 00:30:05.135 "req_id": 1 00:30:05.135 } 00:30:05.135 Got JSON-RPC error response 00:30:05.135 response: 00:30:05.135 { 00:30:05.135 "code": -32602, 00:30:05.135 "message": "Invalid parameters" 00:30:05.135 } 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # es=1 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- host/auth.sh@127 -- # jq length 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:05.135 03:10:55 
nvmf_tcp.nvmf_auth -- host/auth.sh@127 -- # (( 0 == 0 )) 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- host/auth.sh@130 -- # get_main_ns_ip 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- host/auth.sh@130 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@648 -- # local es=0 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:30:05.135 request: 00:30:05.135 { 00:30:05.135 "name": "nvme0", 00:30:05.135 "trtype": "tcp", 00:30:05.135 "traddr": "10.0.0.1", 00:30:05.135 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:30:05.135 "adrfam": "ipv4", 00:30:05.135 "trsvcid": "4420", 00:30:05.135 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:30:05.135 "dhchap_key": "key2", 00:30:05.135 "method": "bdev_nvme_attach_controller", 00:30:05.135 "req_id": 1 00:30:05.135 } 00:30:05.135 Got JSON-RPC error response 00:30:05.135 response: 00:30:05.135 { 00:30:05.135 "code": -32602, 00:30:05.135 "message": "Invalid parameters" 00:30:05.135 } 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # es=1 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- host/auth.sh@133 -- # 
rpc_cmd bdev_nvme_get_controllers 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- host/auth.sh@133 -- # jq length 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- host/auth.sh@133 -- # (( 0 == 0 )) 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- host/auth.sh@136 -- # get_main_ns_ip 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:30:05.135 03:10:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:30:05.136 03:10:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:05.136 03:10:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:05.136 03:10:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:30:05.136 03:10:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:05.136 03:10:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:30:05.136 03:10:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:30:05.136 03:10:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:30:05.136 03:10:55 nvmf_tcp.nvmf_auth -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:05.136 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@648 -- # local es=0 00:30:05.136 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:05.136 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:05.136 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:05.136 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:05.136 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:05.136 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:05.136 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:05.136 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:30:05.136 request: 00:30:05.136 { 00:30:05.136 "name": "nvme0", 00:30:05.136 "trtype": "tcp", 00:30:05.136 "traddr": "10.0.0.1", 00:30:05.136 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:30:05.136 "adrfam": "ipv4", 00:30:05.136 "trsvcid": "4420", 00:30:05.136 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:30:05.136 "dhchap_key": "key1", 00:30:05.136 "dhchap_ctrlr_key": "ckey2", 00:30:05.136 "method": "bdev_nvme_attach_controller", 00:30:05.136 "req_id": 1 00:30:05.136 } 00:30:05.136 Got JSON-RPC error response 00:30:05.136 response: 00:30:05.136 { 00:30:05.136 "code": -32602, 00:30:05.136 "message": "Invalid parameters" 00:30:05.136 } 
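The failure cases follow the same pattern: when the key material offered by the initiator does not match what was loaded into the kernel target (no key at all, the wrong host key, or -- as in the request above -- a valid host key paired with the wrong controller key), bdev_nvme_attach_controller is rejected with JSON-RPC error -32602 and no controller is left behind. A plain exit-status check stands in here for the test's NOT helper; the parameters are taken from the failing request shown above:

  if scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
         -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
         --dhchap-key key1 --dhchap-ctrlr-key ckey2; then
      echo "authentication unexpectedly succeeded" >&2
      exit 1
  fi
  # A failed attach must not register a controller.
  [[ $(scripts/rpc.py bdev_nvme_get_controllers | jq length) -eq 0 ]]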
00:30:05.136 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:05.136 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # es=1 00:30:05.136 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:05.136 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:05.136 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:05.136 03:10:55 nvmf_tcp.nvmf_auth -- host/auth.sh@140 -- # trap - SIGINT SIGTERM EXIT 00:30:05.136 03:10:55 nvmf_tcp.nvmf_auth -- host/auth.sh@141 -- # cleanup 00:30:05.136 03:10:55 nvmf_tcp.nvmf_auth -- host/auth.sh@24 -- # nvmftestfini 00:30:05.136 03:10:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:05.136 03:10:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@117 -- # sync 00:30:05.136 03:10:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:05.136 03:10:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@120 -- # set +e 00:30:05.136 03:10:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:05.136 03:10:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:05.136 rmmod nvme_tcp 00:30:05.136 rmmod nvme_fabrics 00:30:05.136 03:10:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:05.136 03:10:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@124 -- # set -e 00:30:05.136 03:10:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@125 -- # return 0 00:30:05.136 03:10:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@489 -- # '[' -n 467025 ']' 00:30:05.136 03:10:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@490 -- # killprocess 467025 00:30:05.136 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@946 -- # '[' -z 467025 ']' 00:30:05.136 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@950 -- # kill -0 467025 00:30:05.136 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@951 -- # uname 00:30:05.136 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:05.136 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 467025 00:30:05.395 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:05.395 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:05.395 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 467025' 00:30:05.395 killing process with pid 467025 00:30:05.395 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@965 -- # kill 467025 00:30:05.395 03:10:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@970 -- # wait 467025 00:30:05.395 03:10:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:05.395 03:10:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:05.395 03:10:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:05.395 03:10:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:05.395 03:10:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:05.395 03:10:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:05.395 03:10:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:05.395 03:10:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:07.930 03:10:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@279 -- # ip 
-4 addr flush cvl_0_1 00:30:07.930 03:10:58 nvmf_tcp.nvmf_auth -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:30:07.930 03:10:58 nvmf_tcp.nvmf_auth -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:30:07.930 03:10:58 nvmf_tcp.nvmf_auth -- host/auth.sh@27 -- # clean_kernel_target 00:30:07.930 03:10:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:30:07.930 03:10:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@686 -- # echo 0 00:30:07.930 03:10:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:07.930 03:10:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:30:07.930 03:10:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:07.931 03:10:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:07.931 03:10:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:30:07.931 03:10:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:30:07.931 03:10:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:08.497 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:30:08.497 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:30:08.497 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:30:08.497 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:30:08.497 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:30:08.497 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:30:08.497 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:30:08.757 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:30:08.757 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:30:08.757 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:30:08.757 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:30:08.757 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:30:08.757 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:30:08.757 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:30:08.757 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:30:08.757 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:30:09.699 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:30:09.699 03:11:00 nvmf_tcp.nvmf_auth -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.rqN /tmp/spdk.key-null.Q3B /tmp/spdk.key-sha256.4Se /tmp/spdk.key-sha384.qov /tmp/spdk.key-sha512.gQe /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:30:09.699 03:11:00 nvmf_tcp.nvmf_auth -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:10.637 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:30:10.637 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:30:10.896 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:30:10.896 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:30:10.896 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:30:10.896 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:30:10.896 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:30:10.896 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:30:10.896 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 
00:30:10.896 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:30:10.896 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:30:10.896 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:30:10.896 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:30:10.896 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:30:10.896 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:30:10.896 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:30:10.896 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:30:10.896 00:30:10.896 real 0m46.049s 00:30:10.896 user 0m43.882s 00:30:10.896 sys 0m5.532s 00:30:10.896 03:11:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:10.896 03:11:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:30:10.896 ************************************ 00:30:10.896 END TEST nvmf_auth 00:30:10.896 ************************************ 00:30:10.896 03:11:01 nvmf_tcp -- nvmf/nvmf.sh@105 -- # [[ tcp == \t\c\p ]] 00:30:10.896 03:11:01 nvmf_tcp -- nvmf/nvmf.sh@106 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:10.896 03:11:01 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:10.896 03:11:01 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:10.896 03:11:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:10.896 ************************************ 00:30:10.896 START TEST nvmf_digest 00:30:10.896 ************************************ 00:30:10.896 03:11:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:11.155 * Looking for test storage... 
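The clean_kernel_target step traced above tears the kernel nvmet target from the auth test down in reverse order of creation through configfs -- host symlinks first, then namespace and port entries, then the subsystem -- before unloading the modules. A condensed sketch of that sequence (paths and NQNs taken from the trace; the target of the `echo 0` redirect is not visible in the xtrace output and is assumed here to be the namespace enable attribute):

    # Reverse-order teardown of the kernel nvmet target, condensed from the trace.
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    rm "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"            # drop the host ACL symlink
    rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0  # remove the host entry
    echo 0 > "$subsys/namespaces/1/enable"                          # assumed redirect target
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
    rmdir "$subsys/namespaces/1"
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir "$subsys"
    modprobe -r nvmet_tcp nvmet                                     # finally unload the modules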
00:30:11.155 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:11.155 03:11:01 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:11.155 03:11:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:30:11.155 03:11:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:11.156 03:11:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:11.156 03:11:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:11.156 03:11:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:11.156 03:11:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:11.156 03:11:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:11.156 03:11:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:11.156 03:11:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:11.156 03:11:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:11.156 03:11:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:11.156 03:11:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:11.156 03:11:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:11.156 03:11:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:11.156 03:11:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:11.156 03:11:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:11.156 03:11:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:11.156 03:11:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:11.156 03:11:01 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:11.156 03:11:01 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:11.156 03:11:01 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:11.156 03:11:01 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.156 03:11:01 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.156 03:11:01 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.156 03:11:01 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:30:11.156 03:11:01 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.156 03:11:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:30:11.156 03:11:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:11.156 03:11:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:11.156 03:11:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:11.156 03:11:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:11.156 03:11:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:11.156 03:11:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:11.156 03:11:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:11.156 03:11:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:11.156 03:11:01 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:11.156 03:11:01 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:30:11.156 03:11:01 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:30:11.156 03:11:01 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:30:11.156 03:11:01 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:30:11.156 03:11:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:11.156 03:11:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:11.156 03:11:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:11.156 03:11:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:11.156 03:11:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:11.156 03:11:01 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:11.156 03:11:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:11.156 03:11:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:11.156 03:11:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:11.156 03:11:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:11.156 03:11:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:30:11.156 03:11:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:13.063 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:13.063 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:30:13.063 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:13.063 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:13.063 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:13.063 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:13.063 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:13.064 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:13.064 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:13.064 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:13.064 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:13.064 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:13.323 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:13.323 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:13.323 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:30:13.323 00:30:13.323 --- 10.0.0.2 ping statistics --- 00:30:13.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:13.323 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:30:13.323 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:13.323 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:13.323 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:30:13.323 00:30:13.323 --- 10.0.0.1 ping statistics --- 00:30:13.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:13.323 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:30:13.324 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:13.324 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:30:13.324 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:13.324 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:13.324 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:13.324 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:13.324 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:13.324 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:13.324 03:11:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:13.324 03:11:03 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:13.324 03:11:03 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:30:13.324 03:11:03 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:30:13.324 03:11:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:13.324 03:11:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:13.324 03:11:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:13.324 ************************************ 00:30:13.324 START TEST nvmf_digest_clean 00:30:13.324 ************************************ 00:30:13.324 03:11:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:30:13.324 03:11:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:30:13.324 03:11:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:30:13.324 03:11:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:30:13.324 03:11:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:30:13.324 03:11:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:30:13.324 03:11:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:13.324 03:11:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:13.324 03:11:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:13.324 03:11:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=476177 00:30:13.324 03:11:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:13.324 03:11:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 476177 00:30:13.324 03:11:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 476177 ']' 00:30:13.324 03:11:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:13.324 
03:11:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:13.324 03:11:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:13.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:13.324 03:11:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:13.324 03:11:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:13.324 [2024-05-13 03:11:03.982800] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:30:13.324 [2024-05-13 03:11:03.982874] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:13.324 EAL: No free 2048 kB hugepages reported on node 1 00:30:13.324 [2024-05-13 03:11:04.021325] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:13.324 [2024-05-13 03:11:04.047669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:13.582 [2024-05-13 03:11:04.142448] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:13.582 [2024-05-13 03:11:04.142511] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:13.582 [2024-05-13 03:11:04.142523] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:13.582 [2024-05-13 03:11:04.142534] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:13.582 [2024-05-13 03:11:04.142543] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
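Before this target came up, nvmftestinit had already detected the two e810 ports (cvl_0_0 and cvl_0_1) and, since this is a phy run, split them across network namespaces: cvl_0_0 moves into cvl_0_0_ns_spdk as the target interface at 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, an iptables rule opens TCP port 4420, and both directions are verified with ping. The nvmf_tcp_init sequence, condensed from the trace (the initial address flushes are omitted):

    # Target NIC isolated in its own namespace, initiator NIC left in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP in
    ping -c 1 10.0.0.2                                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator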
00:30:13.582 [2024-05-13 03:11:04.142576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:13.582 03:11:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:13.582 03:11:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:30:13.582 03:11:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:13.582 03:11:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:13.582 03:11:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:13.582 03:11:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:13.582 03:11:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:30:13.582 03:11:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:30:13.582 03:11:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:30:13.582 03:11:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:13.582 03:11:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:13.582 null0 00:30:13.582 [2024-05-13 03:11:04.334479] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:13.582 [2024-05-13 03:11:04.358467] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:30:13.582 [2024-05-13 03:11:04.358722] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:13.582 03:11:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:13.582 03:11:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:30:13.582 03:11:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:13.582 03:11:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:13.582 03:11:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:30:13.582 03:11:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:30:13.582 03:11:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:30:13.582 03:11:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:13.582 03:11:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=476200 00:30:13.582 03:11:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 476200 /var/tmp/bperf.sock 00:30:13.582 03:11:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 476200 ']' 00:30:13.582 03:11:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:13.582 03:11:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:13.582 03:11:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
--wait-for-rpc 00:30:13.582 03:11:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:13.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:13.582 03:11:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:13.582 03:11:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:13.840 [2024-05-13 03:11:04.406207] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:30:13.840 [2024-05-13 03:11:04.406292] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid476200 ] 00:30:13.840 EAL: No free 2048 kB hugepages reported on node 1 00:30:13.840 [2024-05-13 03:11:04.438035] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:13.840 [2024-05-13 03:11:04.466140] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:13.840 [2024-05-13 03:11:04.554894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:13.840 03:11:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:13.840 03:11:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:30:13.840 03:11:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:13.840 03:11:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:13.840 03:11:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:14.407 03:11:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:14.407 03:11:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:14.665 nvme0n1 00:30:14.665 03:11:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:14.665 03:11:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:14.665 Running I/O for 2 seconds... 
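The digest target itself is configured over /var/tmp/spdk.sock; only its effects are visible in the trace (the null0 bdev and the TCP listener on 10.0.0.2 port 4420), so the target-side RPCs below are an assumed reconstruction with made-up null bdev sizes. The initiator side is taken verbatim from the trace: bdevperf is started on its own RPC socket with --wait-for-rpc, initialized, attached over NVMe/TCP with --ddgst so CRC32C data digests are negotiated, and then driven with perform_tests for the 2-second run.

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # shortened for readability

    # Assumed target-side configuration (issued against the default /var/tmp/spdk.sock):
    $SPDK/scripts/rpc.py framework_start_init
    $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o              # NVMF_TRANSPORT_OPTS as set above
    $SPDK/scripts/rpc.py bdev_null_create null0 1000 512              # sizes are an assumption
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side, verbatim from the trace (the harness waits for the bperf socket
    # to answer before the first RPC; see the waitforlisten sketch further below):
    $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests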
00:30:17.196 00:30:17.196 Latency(us) 00:30:17.196 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:17.196 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:17.196 nvme0n1 : 2.01 19194.57 74.98 0.00 0.00 6658.54 3131.16 19320.98 00:30:17.196 =================================================================================================================== 00:30:17.196 Total : 19194.57 74.98 0.00 0.00 6658.54 3131.16 19320.98 00:30:17.196 0 00:30:17.196 03:11:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:17.196 03:11:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:17.196 03:11:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:17.196 03:11:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:17.196 | select(.opcode=="crc32c") 00:30:17.196 | "\(.module_name) \(.executed)"' 00:30:17.196 03:11:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:17.196 03:11:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:17.196 03:11:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:17.196 03:11:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:17.196 03:11:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:17.196 03:11:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 476200 00:30:17.196 03:11:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 476200 ']' 00:30:17.196 03:11:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 476200 00:30:17.196 03:11:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:30:17.196 03:11:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:17.196 03:11:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 476200 00:30:17.196 03:11:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:30:17.196 03:11:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:30:17.196 03:11:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 476200' 00:30:17.196 killing process with pid 476200 00:30:17.196 03:11:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 476200 00:30:17.196 Received shutdown signal, test time was about 2.000000 seconds 00:30:17.196 00:30:17.196 Latency(us) 00:30:17.196 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:17.196 =================================================================================================================== 00:30:17.196 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:17.196 03:11:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 476200 00:30:17.455 03:11:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:30:17.455 03:11:08 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:17.455 03:11:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:17.455 03:11:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:30:17.455 03:11:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:30:17.455 03:11:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:30:17.455 03:11:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:17.455 03:11:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=476606 00:30:17.455 03:11:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:30:17.455 03:11:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 476606 /var/tmp/bperf.sock 00:30:17.455 03:11:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 476606 ']' 00:30:17.455 03:11:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:17.455 03:11:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:17.455 03:11:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:17.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:17.455 03:11:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:17.455 03:11:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:17.455 [2024-05-13 03:11:08.047771] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:30:17.455 [2024-05-13 03:11:08.047868] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid476606 ] 00:30:17.455 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:17.455 Zero copy mechanism will not be used. 00:30:17.455 EAL: No free 2048 kB hugepages reported on node 1 00:30:17.455 [2024-05-13 03:11:08.079053] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
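After each 2-second run the harness pulls the accel framework statistics back out of bdevperf, keeps only the crc32c rows, and asserts both that digests were actually computed and that the expected module did the work -- software here, since scan_dsa is false and no DSA initiator was requested. The check, condensed from the trace using the same RPC and jq filter:

    # Verify that CRC32C digest work was actually done, and by the software accel module.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    read -r acc_module acc_executed < <(
        $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )
    (( acc_executed > 0 ))             # some digests were computed
    [[ $acc_module == software ]]      # and the software engine computed them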
00:30:17.455 [2024-05-13 03:11:08.110416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:17.455 [2024-05-13 03:11:08.198482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:17.455 03:11:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:17.455 03:11:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:30:17.455 03:11:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:17.455 03:11:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:17.455 03:11:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:18.022 03:11:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:18.022 03:11:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:18.281 nvme0n1 00:30:18.281 03:11:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:18.281 03:11:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:18.541 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:18.541 Zero copy mechanism will not be used. 00:30:18.541 Running I/O for 2 seconds... 
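Throughout these result tables the MiB/s column is simply IOPS multiplied by the I/O size, so the numbers can be sanity-checked directly; for the first 4 KiB read pass above (19194.57 IOPS):

    # 19194.57 IOPS * 4096 B per I/O, expressed in MiB/s -- matches the 74.98 in the table.
    awk 'BEGIN { printf "%.2f MiB/s\n", 19194.57 * 4096 / (1024 * 1024) }'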
00:30:20.489 00:30:20.489 Latency(us) 00:30:20.489 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:20.489 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:30:20.489 nvme0n1 : 2.00 1797.92 224.74 0.00 0.00 8895.55 7573.05 19029.71 00:30:20.489 =================================================================================================================== 00:30:20.489 Total : 1797.92 224.74 0.00 0.00 8895.55 7573.05 19029.71 00:30:20.489 0 00:30:20.489 03:11:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:20.489 03:11:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:20.489 03:11:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:20.489 03:11:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:20.489 03:11:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:20.489 | select(.opcode=="crc32c") 00:30:20.489 | "\(.module_name) \(.executed)"' 00:30:20.747 03:11:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:20.747 03:11:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:20.747 03:11:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:20.747 03:11:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:20.747 03:11:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 476606 00:30:20.747 03:11:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 476606 ']' 00:30:20.747 03:11:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 476606 00:30:20.747 03:11:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:30:20.747 03:11:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:20.747 03:11:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 476606 00:30:20.747 03:11:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:30:20.747 03:11:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:30:20.747 03:11:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 476606' 00:30:20.747 killing process with pid 476606 00:30:20.747 03:11:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 476606 00:30:20.747 Received shutdown signal, test time was about 2.000000 seconds 00:30:20.747 00:30:20.747 Latency(us) 00:30:20.747 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:20.747 =================================================================================================================== 00:30:20.747 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:20.747 03:11:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 476606 00:30:21.006 03:11:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:30:21.006 03:11:11 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:21.006 03:11:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:21.006 03:11:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:30:21.006 03:11:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:30:21.006 03:11:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:30:21.006 03:11:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:21.006 03:11:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=477012 00:30:21.006 03:11:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:21.006 03:11:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 477012 /var/tmp/bperf.sock 00:30:21.006 03:11:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 477012 ']' 00:30:21.006 03:11:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:21.006 03:11:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:21.006 03:11:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:21.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:21.006 03:11:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:21.006 03:11:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:21.006 [2024-05-13 03:11:11.676213] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:30:21.006 [2024-05-13 03:11:11.676313] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid477012 ] 00:30:21.006 EAL: No free 2048 kB hugepages reported on node 1 00:30:21.006 [2024-05-13 03:11:11.710477] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:30:21.006 [2024-05-13 03:11:11.742847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:21.265 [2024-05-13 03:11:11.837675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:21.265 03:11:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:21.265 03:11:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:30:21.265 03:11:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:21.265 03:11:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:21.265 03:11:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:21.523 03:11:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:21.523 03:11:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:22.088 nvme0n1 00:30:22.088 03:11:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:22.088 03:11:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:22.088 Running I/O for 2 seconds... 00:30:23.986 00:30:23.986 Latency(us) 00:30:23.986 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:23.986 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:23.986 nvme0n1 : 2.01 18884.15 73.77 0.00 0.00 6762.12 5849.69 15534.46 00:30:23.986 =================================================================================================================== 00:30:23.986 Total : 18884.15 73.77 0.00 0.00 6762.12 5849.69 15534.46 00:30:23.986 0 00:30:24.243 03:11:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:24.243 03:11:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:24.243 03:11:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:24.243 03:11:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:24.243 03:11:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:24.243 | select(.opcode=="crc32c") 00:30:24.243 | "\(.module_name) \(.executed)"' 00:30:24.243 03:11:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:24.243 03:11:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:24.243 03:11:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:24.243 03:11:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:24.243 03:11:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 477012 00:30:24.243 03:11:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@946 -- # '[' -z 477012 ']' 00:30:24.243 03:11:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 477012 00:30:24.243 03:11:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:30:24.501 03:11:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:24.501 03:11:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 477012 00:30:24.501 03:11:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:30:24.501 03:11:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:30:24.501 03:11:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 477012' 00:30:24.501 killing process with pid 477012 00:30:24.501 03:11:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 477012 00:30:24.501 Received shutdown signal, test time was about 2.000000 seconds 00:30:24.501 00:30:24.501 Latency(us) 00:30:24.501 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:24.501 =================================================================================================================== 00:30:24.501 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:24.501 03:11:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 477012 00:30:24.501 03:11:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:30:24.501 03:11:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:24.501 03:11:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:24.501 03:11:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:30:24.501 03:11:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:30:24.501 03:11:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:30:24.501 03:11:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:24.501 03:11:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=477540 00:30:24.501 03:11:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 477540 /var/tmp/bperf.sock 00:30:24.501 03:11:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:30:24.501 03:11:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 477540 ']' 00:30:24.501 03:11:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:24.501 03:11:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:24.501 03:11:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:24.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
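Every bdevperf instance gets its own RPC socket (/var/tmp/bperf.sock) and the harness blocks in waitforlisten until that socket answers before issuing any bperf_rpc calls. The helper itself lives in autotest_common.sh and is not shown in this excerpt; a minimal sketch of the pattern, assuming rpc_get_methods as the liveness probe:

    # Minimal sketch of waitforlisten-style polling (assumed probe: rpc_get_methods).
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    wait_for_rpc_sock() {
        local sock=$1
        for _ in $(seq 1 100); do
            "$SPDK/scripts/rpc.py" -s "$sock" -t 1 rpc_get_methods &> /dev/null && return 0
            sleep 0.5
        done
        return 1
    }
    wait_for_rpc_sock /var/tmp/bperf.sock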
00:30:24.501 03:11:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:24.501 03:11:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:24.760 [2024-05-13 03:11:15.318195] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:30:24.760 [2024-05-13 03:11:15.318294] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid477540 ] 00:30:24.760 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:24.760 Zero copy mechanism will not be used. 00:30:24.760 EAL: No free 2048 kB hugepages reported on node 1 00:30:24.760 [2024-05-13 03:11:15.353457] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:24.760 [2024-05-13 03:11:15.381232] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:24.760 [2024-05-13 03:11:15.466891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:24.760 03:11:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:24.760 03:11:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:30:24.760 03:11:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:24.760 03:11:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:24.760 03:11:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:25.326 03:11:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:25.326 03:11:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:25.584 nvme0n1 00:30:25.584 03:11:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:25.584 03:11:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:25.841 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:25.841 Zero copy mechanism will not be used. 00:30:25.841 Running I/O for 2 seconds... 
00:30:27.741 00:30:27.741 Latency(us) 00:30:27.741 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:27.741 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:30:27.741 nvme0n1 : 2.01 1108.28 138.53 0.00 0.00 14382.94 10437.21 22233.69 00:30:27.741 =================================================================================================================== 00:30:27.741 Total : 1108.28 138.53 0.00 0.00 14382.94 10437.21 22233.69 00:30:27.741 0 00:30:27.741 03:11:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:27.741 03:11:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:27.741 03:11:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:27.741 03:11:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:27.741 | select(.opcode=="crc32c") 00:30:27.741 | "\(.module_name) \(.executed)"' 00:30:27.741 03:11:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:27.999 03:11:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:27.999 03:11:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:27.999 03:11:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:27.999 03:11:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:27.999 03:11:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 477540 00:30:27.999 03:11:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 477540 ']' 00:30:27.999 03:11:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 477540 00:30:27.999 03:11:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:30:28.000 03:11:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:28.000 03:11:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 477540 00:30:28.000 03:11:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:30:28.000 03:11:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:30:28.000 03:11:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 477540' 00:30:28.000 killing process with pid 477540 00:30:28.000 03:11:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 477540 00:30:28.000 Received shutdown signal, test time was about 2.000000 seconds 00:30:28.000 00:30:28.000 Latency(us) 00:30:28.000 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:28.000 =================================================================================================================== 00:30:28.000 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:28.000 03:11:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 477540 00:30:28.258 03:11:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 476177 00:30:28.258 03:11:18 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 476177 ']' 00:30:28.258 03:11:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 476177 00:30:28.258 03:11:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:30:28.258 03:11:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:28.258 03:11:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 476177 00:30:28.258 03:11:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:28.258 03:11:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:28.258 03:11:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 476177' 00:30:28.258 killing process with pid 476177 00:30:28.258 03:11:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 476177 00:30:28.258 [2024-05-13 03:11:18.993525] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:30:28.258 03:11:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 476177 00:30:28.516 00:30:28.516 real 0m15.275s 00:30:28.516 user 0m31.032s 00:30:28.516 sys 0m3.636s 00:30:28.516 03:11:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:28.516 03:11:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:28.516 ************************************ 00:30:28.516 END TEST nvmf_digest_clean 00:30:28.516 ************************************ 00:30:28.516 03:11:19 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:30:28.516 03:11:19 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:28.516 03:11:19 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:28.516 03:11:19 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:28.516 ************************************ 00:30:28.516 START TEST nvmf_digest_error 00:30:28.516 ************************************ 00:30:28.516 03:11:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:30:28.516 03:11:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:30:28.516 03:11:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:28.516 03:11:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:28.516 03:11:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:28.516 03:11:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=477978 00:30:28.516 03:11:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:28.516 03:11:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 477978 00:30:28.516 03:11:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 477978 ']' 
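The pass/fail check for each clean run, visible in the accel_get_stats and jq exchange above, is that the crc32c digest work executed at least once and on the expected accel module (plain software here, since scan_dsa was false). A minimal sketch of that check, reusing the RPC shorthand from the earlier sketch:

    # Ask the accel layer which module executed crc32c and how many operations it completed.
    read -r acc_module acc_executed < <(
        $RPC -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    # With DSA scanning disabled, the digests must have been computed by the software module.
    exp_module=software
    (( acc_executed > 0 )) && [[ $acc_module == "$exp_module" ]]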
00:30:28.516 03:11:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:28.516 03:11:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:28.516 03:11:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:28.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:28.516 03:11:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:28.516 03:11:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:28.516 [2024-05-13 03:11:19.300928] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:30:28.516 [2024-05-13 03:11:19.301032] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:28.775 EAL: No free 2048 kB hugepages reported on node 1 00:30:28.775 [2024-05-13 03:11:19.338463] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:28.775 [2024-05-13 03:11:19.364964] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:28.775 [2024-05-13 03:11:19.448674] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:28.775 [2024-05-13 03:11:19.448748] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:28.775 [2024-05-13 03:11:19.448771] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:28.775 [2024-05-13 03:11:19.448783] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:28.775 [2024-05-13 03:11:19.448793] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
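For nvmf_digest_error the target itself is restarted inside the test namespace with --wait-for-rpc, so its accel configuration can be changed before the framework comes up; the waitforlisten message above shows it is reached on the default /var/tmp/spdk.sock socket. A sketch of that launch, with the socket wait again a simplified stand-in:

    # Start the NVMe-oF target paused (--wait-for-rpc) in the test netns, tracepoint group mask 0xFFFF.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
    # Per the startup notice above, a runtime snapshot of those tracepoints can be taken with:
    #   spdk_trace -s nvmf -i 0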
00:30:28.775 [2024-05-13 03:11:19.448819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:28.775 03:11:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:28.775 03:11:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:30:28.775 03:11:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:28.775 03:11:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:28.775 03:11:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:28.775 03:11:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:28.775 03:11:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:30:28.775 03:11:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.775 03:11:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:28.775 [2024-05-13 03:11:19.533405] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:30:28.775 03:11:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.775 03:11:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:30:28.775 03:11:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:30:28.775 03:11:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.775 03:11:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:29.034 null0 00:30:29.034 [2024-05-13 03:11:19.644560] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:29.034 [2024-05-13 03:11:19.668534] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:30:29.034 [2024-05-13 03:11:19.668827] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:29.034 03:11:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.034 03:11:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:30:29.034 03:11:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:30:29.034 03:11:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:30:29.034 03:11:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:30:29.034 03:11:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:30:29.034 03:11:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=478049 00:30:29.034 03:11:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:30:29.034 03:11:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 478049 /var/tmp/bperf.sock 00:30:29.034 03:11:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 478049 ']' 00:30:29.034 
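With the target still paused, crc32c is routed to the error-injection accel module before the usual config (the null0 bdev and the TCP listener on 10.0.0.2:4420) is applied, and a second bdevperf is launched for the randread, 4 KiB, queue-depth-128 error run. A condensed sketch of the commands shown above; $RPC is the shorthand from earlier (rpc.py defaults to /var/tmp/spdk.sock), and the backgrounding is implied by the bperfpid/waitforlisten records:

    # On the paused target: send crc32c digest work through the "error" accel module.
    $RPC accel_assign_opc -o crc32c -m error

    # Launch the error-path bdevperf: randread, 4 KiB I/O, queue depth 128, idle (-z) until perform_tests.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z &
    bperfpid=$!

The records that follow show the bperf side then applying bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1, attaching with --ddgst, and arming accel_error_inject_error -o crc32c -t corrupt -i 256 via rpc_cmd; that injected corruption is what turns the two-second randread run into the stream of nvme_tcp data digest errors logged below.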
03:11:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:29.034 03:11:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:29.034 03:11:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:29.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:29.034 03:11:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:29.034 03:11:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:29.034 [2024-05-13 03:11:19.716595] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:30:29.034 [2024-05-13 03:11:19.716668] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid478049 ] 00:30:29.034 EAL: No free 2048 kB hugepages reported on node 1 00:30:29.034 [2024-05-13 03:11:19.750123] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:29.034 [2024-05-13 03:11:19.776845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:29.292 [2024-05-13 03:11:19.865789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:29.292 03:11:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:29.292 03:11:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:30:29.292 03:11:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:29.292 03:11:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:29.550 03:11:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:29.550 03:11:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.550 03:11:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:29.550 03:11:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.550 03:11:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:29.550 03:11:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:30.116 nvme0n1 00:30:30.116 03:11:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:30:30.116 03:11:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.116 03:11:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@10 -- # set +x 00:30:30.116 03:11:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.116 03:11:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:30.116 03:11:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:30.116 Running I/O for 2 seconds... 00:30:30.116 [2024-05-13 03:11:20.798577] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.116 [2024-05-13 03:11:20.798631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.116 [2024-05-13 03:11:20.798655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.116 [2024-05-13 03:11:20.813901] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.116 [2024-05-13 03:11:20.813934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.116 [2024-05-13 03:11:20.813950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.116 [2024-05-13 03:11:20.825356] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.116 [2024-05-13 03:11:20.825392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.116 [2024-05-13 03:11:20.825412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.116 [2024-05-13 03:11:20.840085] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.116 [2024-05-13 03:11:20.840119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.116 [2024-05-13 03:11:20.840138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.116 [2024-05-13 03:11:20.853249] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.116 [2024-05-13 03:11:20.853285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.117 [2024-05-13 03:11:20.853319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.117 [2024-05-13 03:11:20.866085] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.117 [2024-05-13 03:11:20.866118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.117 [2024-05-13 03:11:20.866135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.117 [2024-05-13 03:11:20.878502] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.117 [2024-05-13 03:11:20.878533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.117 [2024-05-13 03:11:20.878549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.117 [2024-05-13 03:11:20.892326] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.117 [2024-05-13 03:11:20.892360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:18113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.117 [2024-05-13 03:11:20.892380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.117 [2024-05-13 03:11:20.905352] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.117 [2024-05-13 03:11:20.905385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:18602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.117 [2024-05-13 03:11:20.905404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.117 [2024-05-13 03:11:20.918321] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.117 [2024-05-13 03:11:20.918356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.117 [2024-05-13 03:11:20.918376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.375 [2024-05-13 03:11:20.931013] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.375 [2024-05-13 03:11:20.931064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.375 [2024-05-13 03:11:20.931083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.375 [2024-05-13 03:11:20.944245] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.375 [2024-05-13 03:11:20.944280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.375 [2024-05-13 03:11:20.944300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.375 [2024-05-13 03:11:20.957643] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.375 [2024-05-13 03:11:20.957676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.375 [2024-05-13 03:11:20.957710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.375 [2024-05-13 03:11:20.971803] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.375 [2024-05-13 03:11:20.971834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.375 [2024-05-13 03:11:20.971851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.375 [2024-05-13 03:11:20.984160] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.375 [2024-05-13 03:11:20.984194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.375 [2024-05-13 03:11:20.984213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.375 [2024-05-13 03:11:20.998314] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.375 [2024-05-13 03:11:20.998348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.375 [2024-05-13 03:11:20.998367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.376 [2024-05-13 03:11:21.010193] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.376 [2024-05-13 03:11:21.010227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.376 [2024-05-13 03:11:21.010246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.376 [2024-05-13 03:11:21.024913] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.376 [2024-05-13 03:11:21.024945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.376 [2024-05-13 03:11:21.024962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.376 [2024-05-13 03:11:21.038504] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.376 [2024-05-13 03:11:21.038539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.376 [2024-05-13 03:11:21.038558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.376 [2024-05-13 03:11:21.051301] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.376 [2024-05-13 03:11:21.051336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.376 [2024-05-13 03:11:21.051355] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.376 [2024-05-13 03:11:21.064168] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.376 [2024-05-13 03:11:21.064204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.376 [2024-05-13 03:11:21.064224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.376 [2024-05-13 03:11:21.077311] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.376 [2024-05-13 03:11:21.077352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.376 [2024-05-13 03:11:21.077372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.376 [2024-05-13 03:11:21.090081] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.376 [2024-05-13 03:11:21.090115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:10374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.376 [2024-05-13 03:11:21.090135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.376 [2024-05-13 03:11:21.103963] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.376 [2024-05-13 03:11:21.104011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.376 [2024-05-13 03:11:21.104031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.376 [2024-05-13 03:11:21.117782] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.376 [2024-05-13 03:11:21.117809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:18090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.376 [2024-05-13 03:11:21.117829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.376 [2024-05-13 03:11:21.130182] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.376 [2024-05-13 03:11:21.130215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:17722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.376 [2024-05-13 03:11:21.130245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.376 [2024-05-13 03:11:21.144295] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.376 [2024-05-13 03:11:21.144329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:30.376 [2024-05-13 03:11:21.144349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.376 [2024-05-13 03:11:21.156569] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.376 [2024-05-13 03:11:21.156602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.376 [2024-05-13 03:11:21.156621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.376 [2024-05-13 03:11:21.170390] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.376 [2024-05-13 03:11:21.170424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.376 [2024-05-13 03:11:21.170443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.636 [2024-05-13 03:11:21.185336] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.636 [2024-05-13 03:11:21.185371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.636 [2024-05-13 03:11:21.185390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.636 [2024-05-13 03:11:21.196994] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.636 [2024-05-13 03:11:21.197040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:13774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.636 [2024-05-13 03:11:21.197059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.636 [2024-05-13 03:11:21.210638] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.636 [2024-05-13 03:11:21.210671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:3315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.636 [2024-05-13 03:11:21.210706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.636 [2024-05-13 03:11:21.223877] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.636 [2024-05-13 03:11:21.223906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.636 [2024-05-13 03:11:21.223923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.636 [2024-05-13 03:11:21.238141] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.636 [2024-05-13 03:11:21.238175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:713 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.636 [2024-05-13 03:11:21.238195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.636 [2024-05-13 03:11:21.251219] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.636 [2024-05-13 03:11:21.251252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.636 [2024-05-13 03:11:21.251272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.636 [2024-05-13 03:11:21.263920] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.636 [2024-05-13 03:11:21.263949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:6761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.636 [2024-05-13 03:11:21.263966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.636 [2024-05-13 03:11:21.278528] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.636 [2024-05-13 03:11:21.278561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:16340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.636 [2024-05-13 03:11:21.278583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.636 [2024-05-13 03:11:21.291612] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.636 [2024-05-13 03:11:21.291645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.636 [2024-05-13 03:11:21.291665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.636 [2024-05-13 03:11:21.304108] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.636 [2024-05-13 03:11:21.304142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:20977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.636 [2024-05-13 03:11:21.304170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.636 [2024-05-13 03:11:21.317874] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.636 [2024-05-13 03:11:21.317904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.636 [2024-05-13 03:11:21.317924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.636 [2024-05-13 03:11:21.331693] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.636 [2024-05-13 03:11:21.331749] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.636 [2024-05-13 03:11:21.331767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.636 [2024-05-13 03:11:21.345562] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.636 [2024-05-13 03:11:21.345595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.636 [2024-05-13 03:11:21.345613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.636 [2024-05-13 03:11:21.357878] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.636 [2024-05-13 03:11:21.357908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.636 [2024-05-13 03:11:21.357927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.636 [2024-05-13 03:11:21.372144] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.636 [2024-05-13 03:11:21.372178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:14835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.636 [2024-05-13 03:11:21.372197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.636 [2024-05-13 03:11:21.383586] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.636 [2024-05-13 03:11:21.383619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.636 [2024-05-13 03:11:21.383639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.636 [2024-05-13 03:11:21.397256] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.636 [2024-05-13 03:11:21.397289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:21763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.636 [2024-05-13 03:11:21.397307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.636 [2024-05-13 03:11:21.410793] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.636 [2024-05-13 03:11:21.410822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:15643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.636 [2024-05-13 03:11:21.410841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.636 [2024-05-13 03:11:21.423947] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.636 
[2024-05-13 03:11:21.423983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.636 [2024-05-13 03:11:21.424025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.637 [2024-05-13 03:11:21.437331] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.637 [2024-05-13 03:11:21.437367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.637 [2024-05-13 03:11:21.437385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.895 [2024-05-13 03:11:21.451769] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.895 [2024-05-13 03:11:21.451800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:9936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.895 [2024-05-13 03:11:21.451832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.895 [2024-05-13 03:11:21.464025] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.895 [2024-05-13 03:11:21.464059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.895 [2024-05-13 03:11:21.464078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.895 [2024-05-13 03:11:21.478922] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.895 [2024-05-13 03:11:21.478953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:10044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.895 [2024-05-13 03:11:21.478972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.895 [2024-05-13 03:11:21.490833] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.895 [2024-05-13 03:11:21.490862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.895 [2024-05-13 03:11:21.490879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.895 [2024-05-13 03:11:21.504653] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.895 [2024-05-13 03:11:21.504686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.895 [2024-05-13 03:11:21.504718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.895 [2024-05-13 03:11:21.519204] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0xd4e4a0) 00:30:30.895 [2024-05-13 03:11:21.519238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.895 [2024-05-13 03:11:21.519257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.895 [2024-05-13 03:11:21.532917] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.895 [2024-05-13 03:11:21.532947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.895 [2024-05-13 03:11:21.532967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.895 [2024-05-13 03:11:21.545794] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.895 [2024-05-13 03:11:21.545824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.895 [2024-05-13 03:11:21.545844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.895 [2024-05-13 03:11:21.558977] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.895 [2024-05-13 03:11:21.559023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.895 [2024-05-13 03:11:21.559042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.895 [2024-05-13 03:11:21.571616] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.895 [2024-05-13 03:11:21.571649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:14529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.895 [2024-05-13 03:11:21.571668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.895 [2024-05-13 03:11:21.584927] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.895 [2024-05-13 03:11:21.584958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.895 [2024-05-13 03:11:21.584977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.895 [2024-05-13 03:11:21.597816] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.895 [2024-05-13 03:11:21.597846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.895 [2024-05-13 03:11:21.597867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.895 [2024-05-13 03:11:21.611833] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.895 [2024-05-13 03:11:21.611863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:14133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.895 [2024-05-13 03:11:21.611883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.895 [2024-05-13 03:11:21.624692] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.895 [2024-05-13 03:11:21.624747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.895 [2024-05-13 03:11:21.624765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.895 [2024-05-13 03:11:21.638316] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.895 [2024-05-13 03:11:21.638350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:14262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.895 [2024-05-13 03:11:21.638375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.895 [2024-05-13 03:11:21.652039] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.895 [2024-05-13 03:11:21.652072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.895 [2024-05-13 03:11:21.652099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.895 [2024-05-13 03:11:21.664934] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.895 [2024-05-13 03:11:21.664964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.895 [2024-05-13 03:11:21.664981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.895 [2024-05-13 03:11:21.677762] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.895 [2024-05-13 03:11:21.677791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.896 [2024-05-13 03:11:21.677811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.896 [2024-05-13 03:11:21.692385] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:30.896 [2024-05-13 03:11:21.692419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:7744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.896 [2024-05-13 03:11:21.692438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:30:31.154 [2024-05-13 03:11:21.705876] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.154 [2024-05-13 03:11:21.705905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.154 [2024-05-13 03:11:21.705922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.154 [2024-05-13 03:11:21.718656] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.154 [2024-05-13 03:11:21.718689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:18998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.154 [2024-05-13 03:11:21.718744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.154 [2024-05-13 03:11:21.732203] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.154 [2024-05-13 03:11:21.732236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:22394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.154 [2024-05-13 03:11:21.732256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.154 [2024-05-13 03:11:21.745243] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.154 [2024-05-13 03:11:21.745278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.154 [2024-05-13 03:11:21.745297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.154 [2024-05-13 03:11:21.759002] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.154 [2024-05-13 03:11:21.759036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.154 [2024-05-13 03:11:21.759056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.154 [2024-05-13 03:11:21.771709] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.154 [2024-05-13 03:11:21.771756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.154 [2024-05-13 03:11:21.771783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.154 [2024-05-13 03:11:21.784479] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.154 [2024-05-13 03:11:21.784513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.154 [2024-05-13 03:11:21.784532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.154 [2024-05-13 03:11:21.798834] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.154 [2024-05-13 03:11:21.798864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:24339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.154 [2024-05-13 03:11:21.798882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.154 [2024-05-13 03:11:21.810665] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.154 [2024-05-13 03:11:21.810708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.154 [2024-05-13 03:11:21.810753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.154 [2024-05-13 03:11:21.825528] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.154 [2024-05-13 03:11:21.825563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.154 [2024-05-13 03:11:21.825583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.154 [2024-05-13 03:11:21.838184] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.154 [2024-05-13 03:11:21.838218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:11384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.154 [2024-05-13 03:11:21.838237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.154 [2024-05-13 03:11:21.851928] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.154 [2024-05-13 03:11:21.851958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.155 [2024-05-13 03:11:21.851976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.155 [2024-05-13 03:11:21.866461] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.155 [2024-05-13 03:11:21.866496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.155 [2024-05-13 03:11:21.866517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.155 [2024-05-13 03:11:21.878312] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.155 [2024-05-13 03:11:21.878356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.155 [2024-05-13 03:11:21.878381] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.155 [2024-05-13 03:11:21.891766] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.155 [2024-05-13 03:11:21.891797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.155 [2024-05-13 03:11:21.891815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.155 [2024-05-13 03:11:21.906264] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.155 [2024-05-13 03:11:21.906300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:16342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.155 [2024-05-13 03:11:21.906319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.155 [2024-05-13 03:11:21.920328] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.155 [2024-05-13 03:11:21.920362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:6399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.155 [2024-05-13 03:11:21.920381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.155 [2024-05-13 03:11:21.933553] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.155 [2024-05-13 03:11:21.933586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.155 [2024-05-13 03:11:21.933605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.155 [2024-05-13 03:11:21.945614] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.155 [2024-05-13 03:11:21.945646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.155 [2024-05-13 03:11:21.945663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.413 [2024-05-13 03:11:21.959116] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.413 [2024-05-13 03:11:21.959155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.413 [2024-05-13 03:11:21.959176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.413 [2024-05-13 03:11:21.973467] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.413 [2024-05-13 03:11:21.973502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.413 [2024-05-13 03:11:21.973522] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.413 [2024-05-13 03:11:21.985844] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.413 [2024-05-13 03:11:21.985875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:18269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.413 [2024-05-13 03:11:21.985893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.413 [2024-05-13 03:11:21.998856] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.413 [2024-05-13 03:11:21.998892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.413 [2024-05-13 03:11:21.998924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.413 [2024-05-13 03:11:22.012826] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.413 [2024-05-13 03:11:22.012860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.413 [2024-05-13 03:11:22.012877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.413 [2024-05-13 03:11:22.026176] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.413 [2024-05-13 03:11:22.026210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.413 [2024-05-13 03:11:22.026229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.413 [2024-05-13 03:11:22.039411] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.413 [2024-05-13 03:11:22.039445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:16739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.413 [2024-05-13 03:11:22.039464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.413 [2024-05-13 03:11:22.052943] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.413 [2024-05-13 03:11:22.052975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.413 [2024-05-13 03:11:22.052993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.413 [2024-05-13 03:11:22.066934] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.413 [2024-05-13 03:11:22.066964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:31.413 [2024-05-13 03:11:22.066981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.413 [2024-05-13 03:11:22.079363] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.413 [2024-05-13 03:11:22.079398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.413 [2024-05-13 03:11:22.079417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.413 [2024-05-13 03:11:22.092772] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.413 [2024-05-13 03:11:22.092814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:21896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.413 [2024-05-13 03:11:22.092832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.413 [2024-05-13 03:11:22.106683] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.413 [2024-05-13 03:11:22.106740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:17565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.413 [2024-05-13 03:11:22.106759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.413 [2024-05-13 03:11:22.120748] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.413 [2024-05-13 03:11:22.120778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.414 [2024-05-13 03:11:22.120794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.414 [2024-05-13 03:11:22.133716] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.414 [2024-05-13 03:11:22.133750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.414 [2024-05-13 03:11:22.133783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.414 [2024-05-13 03:11:22.146266] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.414 [2024-05-13 03:11:22.146299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:22314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.414 [2024-05-13 03:11:22.146318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.414 [2024-05-13 03:11:22.160926] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.414 [2024-05-13 03:11:22.160956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3187 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.414 [2024-05-13 03:11:22.160989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.414 [2024-05-13 03:11:22.173750] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.414 [2024-05-13 03:11:22.173781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:3106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.414 [2024-05-13 03:11:22.173798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.414 [2024-05-13 03:11:22.187619] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.414 [2024-05-13 03:11:22.187653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.414 [2024-05-13 03:11:22.187672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.414 [2024-05-13 03:11:22.200540] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.414 [2024-05-13 03:11:22.200573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:18843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.414 [2024-05-13 03:11:22.200592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.414 [2024-05-13 03:11:22.214869] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.414 [2024-05-13 03:11:22.214900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:18438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.414 [2024-05-13 03:11:22.214918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.672 [2024-05-13 03:11:22.226149] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.672 [2024-05-13 03:11:22.226183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:18233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.672 [2024-05-13 03:11:22.226207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.672 [2024-05-13 03:11:22.240372] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.672 [2024-05-13 03:11:22.240407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.672 [2024-05-13 03:11:22.240427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.672 [2024-05-13 03:11:22.253968] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.672 [2024-05-13 03:11:22.254002] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.672 [2024-05-13 03:11:22.254020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.672 [2024-05-13 03:11:22.266931] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.672 [2024-05-13 03:11:22.266975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.672 [2024-05-13 03:11:22.266992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.672 [2024-05-13 03:11:22.281908] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.672 [2024-05-13 03:11:22.281938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.672 [2024-05-13 03:11:22.281956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.672 [2024-05-13 03:11:22.293818] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.672 [2024-05-13 03:11:22.293847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.672 [2024-05-13 03:11:22.293879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.672 [2024-05-13 03:11:22.307464] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.672 [2024-05-13 03:11:22.307498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:7654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.672 [2024-05-13 03:11:22.307517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.672 [2024-05-13 03:11:22.320786] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.672 [2024-05-13 03:11:22.320816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.672 [2024-05-13 03:11:22.320834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.672 [2024-05-13 03:11:22.334007] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.672 [2024-05-13 03:11:22.334057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.672 [2024-05-13 03:11:22.334076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.672 [2024-05-13 03:11:22.346307] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.672 [2024-05-13 03:11:22.346350] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.672 [2024-05-13 03:11:22.346370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.672 [2024-05-13 03:11:22.361003] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.672 [2024-05-13 03:11:22.361037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.672 [2024-05-13 03:11:22.361056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.672 [2024-05-13 03:11:22.373214] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.672 [2024-05-13 03:11:22.373248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.672 [2024-05-13 03:11:22.373267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.672 [2024-05-13 03:11:22.387542] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.672 [2024-05-13 03:11:22.387576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.672 [2024-05-13 03:11:22.387595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.673 [2024-05-13 03:11:22.400218] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.673 [2024-05-13 03:11:22.400251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:15019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.673 [2024-05-13 03:11:22.400271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.673 [2024-05-13 03:11:22.413824] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.673 [2024-05-13 03:11:22.413852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.673 [2024-05-13 03:11:22.413868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.673 [2024-05-13 03:11:22.426596] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.673 [2024-05-13 03:11:22.426629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.673 [2024-05-13 03:11:22.426648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.673 [2024-05-13 03:11:22.440575] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xd4e4a0) 00:30:31.673 [2024-05-13 03:11:22.440608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.673 [2024-05-13 03:11:22.440628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.673 [2024-05-13 03:11:22.453290] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.673 [2024-05-13 03:11:22.453324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.673 [2024-05-13 03:11:22.453343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.673 [2024-05-13 03:11:22.467836] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.673 [2024-05-13 03:11:22.467867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.673 [2024-05-13 03:11:22.467884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.930 [2024-05-13 03:11:22.480215] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.930 [2024-05-13 03:11:22.480250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.930 [2024-05-13 03:11:22.480269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.930 [2024-05-13 03:11:22.494430] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.930 [2024-05-13 03:11:22.494464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.930 [2024-05-13 03:11:22.494483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.930 [2024-05-13 03:11:22.508972] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.930 [2024-05-13 03:11:22.509020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.930 [2024-05-13 03:11:22.509039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.930 [2024-05-13 03:11:22.521493] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.930 [2024-05-13 03:11:22.521527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:15177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.930 [2024-05-13 03:11:22.521546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.930 [2024-05-13 03:11:22.535074] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.930 [2024-05-13 03:11:22.535108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.930 [2024-05-13 03:11:22.535127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.930 [2024-05-13 03:11:22.547388] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.930 [2024-05-13 03:11:22.547422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:5058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.930 [2024-05-13 03:11:22.547441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.930 [2024-05-13 03:11:22.561720] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.931 [2024-05-13 03:11:22.561766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.931 [2024-05-13 03:11:22.561783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.931 [2024-05-13 03:11:22.574325] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.931 [2024-05-13 03:11:22.574359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:18196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.931 [2024-05-13 03:11:22.574384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.931 [2024-05-13 03:11:22.588253] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.931 [2024-05-13 03:11:22.588288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:11226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.931 [2024-05-13 03:11:22.588308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.931 [2024-05-13 03:11:22.600638] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.931 [2024-05-13 03:11:22.600671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.931 [2024-05-13 03:11:22.600689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.931 [2024-05-13 03:11:22.614890] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.931 [2024-05-13 03:11:22.614920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.931 [2024-05-13 03:11:22.614938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:30:31.931 [2024-05-13 03:11:22.627733] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.931 [2024-05-13 03:11:22.627779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:8187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.931 [2024-05-13 03:11:22.627796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.931 [2024-05-13 03:11:22.640811] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.931 [2024-05-13 03:11:22.640841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.931 [2024-05-13 03:11:22.640859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.931 [2024-05-13 03:11:22.655712] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.931 [2024-05-13 03:11:22.655760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:9576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.931 [2024-05-13 03:11:22.655777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.931 [2024-05-13 03:11:22.668291] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.931 [2024-05-13 03:11:22.668325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:16603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.931 [2024-05-13 03:11:22.668344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.931 [2024-05-13 03:11:22.681378] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.931 [2024-05-13 03:11:22.681412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.931 [2024-05-13 03:11:22.681431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.931 [2024-05-13 03:11:22.694477] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.931 [2024-05-13 03:11:22.694510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.931 [2024-05-13 03:11:22.694529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.931 [2024-05-13 03:11:22.707553] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.931 [2024-05-13 03:11:22.707587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.931 [2024-05-13 03:11:22.707606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.931 [2024-05-13 03:11:22.722559] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:31.931 [2024-05-13 03:11:22.722592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.931 [2024-05-13 03:11:22.722612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.189 [2024-05-13 03:11:22.734773] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:32.189 [2024-05-13 03:11:22.734804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.189 [2024-05-13 03:11:22.734821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.189 [2024-05-13 03:11:22.747826] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:32.189 [2024-05-13 03:11:22.747857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:17459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.189 [2024-05-13 03:11:22.747874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.189 [2024-05-13 03:11:22.760808] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:32.189 [2024-05-13 03:11:22.760839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.189 [2024-05-13 03:11:22.760856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.189 [2024-05-13 03:11:22.774104] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4e4a0) 00:30:32.189 [2024-05-13 03:11:22.774138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:20587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.189 [2024-05-13 03:11:22.774157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.189 00:30:32.189 Latency(us) 00:30:32.189 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:32.189 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:32.189 nvme0n1 : 2.01 19022.45 74.31 0.00 0.00 6717.91 3155.44 20097.71 00:30:32.189 =================================================================================================================== 00:30:32.189 Total : 19022.45 74.31 0.00 0.00 6717.91 3155.44 20097.71 00:30:32.189 0 00:30:32.189 03:11:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:32.189 03:11:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:32.189 03:11:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:32.189 | .driver_specific 00:30:32.189 | .nvme_error 00:30:32.189 | 
.status_code 00:30:32.189 | .command_transient_transport_error' 00:30:32.189 03:11:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:32.447 03:11:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 149 > 0 )) 00:30:32.447 03:11:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 478049 00:30:32.447 03:11:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 478049 ']' 00:30:32.447 03:11:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 478049 00:30:32.447 03:11:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:30:32.447 03:11:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:32.447 03:11:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 478049 00:30:32.447 03:11:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:30:32.447 03:11:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:30:32.447 03:11:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 478049' 00:30:32.447 killing process with pid 478049 00:30:32.447 03:11:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 478049 00:30:32.447 Received shutdown signal, test time was about 2.000000 seconds 00:30:32.447 00:30:32.447 Latency(us) 00:30:32.447 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:32.447 =================================================================================================================== 00:30:32.447 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:32.447 03:11:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 478049 00:30:32.705 03:11:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:30:32.705 03:11:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:30:32.705 03:11:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:30:32.705 03:11:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:30:32.705 03:11:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:30:32.705 03:11:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=478524 00:30:32.706 03:11:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:30:32.706 03:11:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 478524 /var/tmp/bperf.sock 00:30:32.706 03:11:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 478524 ']' 00:30:32.706 03:11:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:32.706 03:11:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:32.706 03:11:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:32.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:32.706 03:11:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:32.706 03:11:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:32.706 [2024-05-13 03:11:23.341440] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:30:32.706 [2024-05-13 03:11:23.341520] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid478524 ] 00:30:32.706 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:32.706 Zero copy mechanism will not be used. 00:30:32.706 EAL: No free 2048 kB hugepages reported on node 1 00:30:32.706 [2024-05-13 03:11:23.374289] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:32.706 [2024-05-13 03:11:23.404881] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:32.706 [2024-05-13 03:11:23.502144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:32.964 03:11:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:32.964 03:11:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:30:32.964 03:11:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:32.964 03:11:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:33.220 03:11:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:33.220 03:11:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.220 03:11:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:33.220 03:11:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.220 03:11:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:33.220 03:11:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:33.543 nvme0n1 00:30:33.805 03:11:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:30:33.805 03:11:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.805 03:11:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:33.805 03:11:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.805 03:11:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # 
bperf_py perform_tests 00:30:33.805 03:11:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:33.805 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:33.805 Zero copy mechanism will not be used. 00:30:33.805 Running I/O for 2 seconds... 00:30:33.805 [2024-05-13 03:11:24.475434] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:33.805 [2024-05-13 03:11:24.475492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.805 [2024-05-13 03:11:24.475517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:33.805 [2024-05-13 03:11:24.494096] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:33.805 [2024-05-13 03:11:24.494135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.805 [2024-05-13 03:11:24.494156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:33.805 [2024-05-13 03:11:24.514392] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:33.805 [2024-05-13 03:11:24.514437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.805 [2024-05-13 03:11:24.514458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:33.805 [2024-05-13 03:11:24.531980] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:33.805 [2024-05-13 03:11:24.532025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.805 [2024-05-13 03:11:24.532042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.805 [2024-05-13 03:11:24.550302] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:33.805 [2024-05-13 03:11:24.550339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.805 [2024-05-13 03:11:24.550359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:33.805 [2024-05-13 03:11:24.567280] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:33.805 [2024-05-13 03:11:24.567317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.805 [2024-05-13 03:11:24.567337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:33.805 [2024-05-13 03:11:24.584899] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:33.805 [2024-05-13 03:11:24.584928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.805 [2024-05-13 03:11:24.584944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:33.805 [2024-05-13 03:11:24.602398] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:33.805 [2024-05-13 03:11:24.602434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.805 [2024-05-13 03:11:24.602455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:34.064 [2024-05-13 03:11:24.620403] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.064 [2024-05-13 03:11:24.620439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.064 [2024-05-13 03:11:24.620459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:34.064 [2024-05-13 03:11:24.637507] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.064 [2024-05-13 03:11:24.637543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.064 [2024-05-13 03:11:24.637563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:34.064 [2024-05-13 03:11:24.654673] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.064 [2024-05-13 03:11:24.654717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.064 [2024-05-13 03:11:24.654739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:34.064 [2024-05-13 03:11:24.671752] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.064 [2024-05-13 03:11:24.671783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.064 [2024-05-13 03:11:24.671799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:34.064 [2024-05-13 03:11:24.688894] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.064 [2024-05-13 03:11:24.688925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.064 [2024-05-13 03:11:24.688942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:30:34.064 [2024-05-13 03:11:24.705957] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.064 [2024-05-13 03:11:24.705987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.064 [2024-05-13 03:11:24.706003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:34.064 [2024-05-13 03:11:24.723059] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.064 [2024-05-13 03:11:24.723094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.064 [2024-05-13 03:11:24.723115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:34.064 [2024-05-13 03:11:24.740171] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.064 [2024-05-13 03:11:24.740206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.064 [2024-05-13 03:11:24.740226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:34.064 [2024-05-13 03:11:24.757314] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.064 [2024-05-13 03:11:24.757350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.064 [2024-05-13 03:11:24.757369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:34.064 [2024-05-13 03:11:24.774375] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.064 [2024-05-13 03:11:24.774410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.064 [2024-05-13 03:11:24.774429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:34.064 [2024-05-13 03:11:24.791481] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.064 [2024-05-13 03:11:24.791516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.064 [2024-05-13 03:11:24.791536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:34.064 [2024-05-13 03:11:24.808309] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.064 [2024-05-13 03:11:24.808350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.064 [2024-05-13 03:11:24.808371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:34.064 [2024-05-13 03:11:24.825085] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.064 [2024-05-13 03:11:24.825121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.064 [2024-05-13 03:11:24.825141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:34.064 [2024-05-13 03:11:24.841921] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.064 [2024-05-13 03:11:24.841951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.064 [2024-05-13 03:11:24.841967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:34.064 [2024-05-13 03:11:24.858626] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.064 [2024-05-13 03:11:24.858662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.064 [2024-05-13 03:11:24.858681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:34.322 [2024-05-13 03:11:24.876125] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.322 [2024-05-13 03:11:24.876161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.322 [2024-05-13 03:11:24.876181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:34.322 [2024-05-13 03:11:24.892838] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.322 [2024-05-13 03:11:24.892867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.322 [2024-05-13 03:11:24.892884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:34.322 [2024-05-13 03:11:24.909500] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.322 [2024-05-13 03:11:24.909535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.322 [2024-05-13 03:11:24.909555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:34.322 [2024-05-13 03:11:24.926164] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.322 [2024-05-13 03:11:24.926199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.322 [2024-05-13 03:11:24.926219] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:34.322 [2024-05-13 03:11:24.942782] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.322 [2024-05-13 03:11:24.942810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.322 [2024-05-13 03:11:24.942826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:34.322 [2024-05-13 03:11:24.959523] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.322 [2024-05-13 03:11:24.959558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.322 [2024-05-13 03:11:24.959577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:34.322 [2024-05-13 03:11:24.976433] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.322 [2024-05-13 03:11:24.976468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.322 [2024-05-13 03:11:24.976487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:34.322 [2024-05-13 03:11:24.992917] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.322 [2024-05-13 03:11:24.992948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.322 [2024-05-13 03:11:24.992964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:34.322 [2024-05-13 03:11:25.009161] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.322 [2024-05-13 03:11:25.009198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.322 [2024-05-13 03:11:25.009219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:34.322 [2024-05-13 03:11:25.025485] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.322 [2024-05-13 03:11:25.025520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.322 [2024-05-13 03:11:25.025540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:34.322 [2024-05-13 03:11:25.041551] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.322 [2024-05-13 03:11:25.041586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
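The 128 KiB / QD16 run above follows the same sequence as the 4 KiB run before it, and every step is visible in the xtrace output: bdevperf is started in wait mode on its own RPC socket, NVMe error accounting is switched on, a controller is attached over TCP with data digest (--ddgst) enabled, crc32c results are corrupted through the accel error-injection RPC, I/O is driven via perform_tests, and the transient transport error count is read back from bdev_get_iostat. Condensed into a sketch, with the commands exactly as they appear in the trace but paths shortened to the SPDK tree root (the target-side listener and subsystem are configured earlier in the log and are not repeated here):

  # start the initiator-side bdevperf in wait mode (-z) on a private RPC socket
  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &

  # enable per-controller NVMe error statistics and let the bdev layer keep retrying (-1)
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # attach the controller over TCP with data digest enabled
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # corrupt crc32c results through the accel error-injection RPC (-i 32 as used in the trace);
  # this one goes to the default application socket, as the rpc_cmd wrapper does above
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

  # run the workload, then read back how many completions were transient transport errors
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

Each injected crc32c corruption shows up in the log as the three-entry group repeated above and below: the data digest check fails in nvme_tcp.c, the offending READ is printed, and its completion comes back as COMMAND TRANSIENT TRANSPORT ERROR (status 00/22, dnr:0), which is accumulated in the counter that host/digest.sh asserts on (the '(( 149 > 0 ))' check after the first run).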
00:30:34.322 [2024-05-13 03:11:25.041606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:34.322 [2024-05-13 03:11:25.057649] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.322 [2024-05-13 03:11:25.057684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.322 [2024-05-13 03:11:25.057712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:34.322 [2024-05-13 03:11:25.073839] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.322 [2024-05-13 03:11:25.073868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.322 [2024-05-13 03:11:25.073884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:34.322 [2024-05-13 03:11:25.090262] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.322 [2024-05-13 03:11:25.090297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.322 [2024-05-13 03:11:25.090322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:34.322 [2024-05-13 03:11:25.106822] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.322 [2024-05-13 03:11:25.106851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.322 [2024-05-13 03:11:25.106867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:34.322 [2024-05-13 03:11:25.123411] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.322 [2024-05-13 03:11:25.123447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.322 [2024-05-13 03:11:25.123481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:34.580 [2024-05-13 03:11:25.139423] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.580 [2024-05-13 03:11:25.139458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.580 [2024-05-13 03:11:25.139478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:34.580 [2024-05-13 03:11:25.155041] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.580 [2024-05-13 03:11:25.155076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.580 [2024-05-13 03:11:25.155096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:34.580 [2024-05-13 03:11:25.170642] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.580 [2024-05-13 03:11:25.170677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.580 [2024-05-13 03:11:25.170704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:34.580 [2024-05-13 03:11:25.186385] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.580 [2024-05-13 03:11:25.186420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.580 [2024-05-13 03:11:25.186440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:34.580 [2024-05-13 03:11:25.202041] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.580 [2024-05-13 03:11:25.202076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.580 [2024-05-13 03:11:25.202096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:34.580 [2024-05-13 03:11:25.217911] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.580 [2024-05-13 03:11:25.217940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.580 [2024-05-13 03:11:25.217956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:34.580 [2024-05-13 03:11:25.234388] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.580 [2024-05-13 03:11:25.234428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.580 [2024-05-13 03:11:25.234448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:34.580 [2024-05-13 03:11:25.250691] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.581 [2024-05-13 03:11:25.250750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.581 [2024-05-13 03:11:25.250767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:34.581 [2024-05-13 03:11:25.267030] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.581 [2024-05-13 03:11:25.267065] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.581 [2024-05-13 03:11:25.267084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:34.581 [2024-05-13 03:11:25.283280] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.581 [2024-05-13 03:11:25.283314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.581 [2024-05-13 03:11:25.283334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:34.581 [2024-05-13 03:11:25.299663] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.581 [2024-05-13 03:11:25.299704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.581 [2024-05-13 03:11:25.299726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:34.581 [2024-05-13 03:11:25.316330] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.581 [2024-05-13 03:11:25.316364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.581 [2024-05-13 03:11:25.316384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:34.581 [2024-05-13 03:11:25.332891] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.581 [2024-05-13 03:11:25.332920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.581 [2024-05-13 03:11:25.332937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:34.581 [2024-05-13 03:11:25.349676] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.581 [2024-05-13 03:11:25.349722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.581 [2024-05-13 03:11:25.349756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:34.581 [2024-05-13 03:11:25.366144] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.581 [2024-05-13 03:11:25.366179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.581 [2024-05-13 03:11:25.366205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:34.839 [2024-05-13 03:11:25.382774] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 
00:30:34.839 [2024-05-13 03:11:25.382824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.839 [2024-05-13 03:11:25.382843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:34.839 [2024-05-13 03:11:25.399582] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.839 [2024-05-13 03:11:25.399618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.839 [2024-05-13 03:11:25.399640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:34.839 [2024-05-13 03:11:25.415663] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.839 [2024-05-13 03:11:25.415707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.839 [2024-05-13 03:11:25.415744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:34.839 [2024-05-13 03:11:25.431918] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.839 [2024-05-13 03:11:25.431949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.839 [2024-05-13 03:11:25.431965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:34.839 [2024-05-13 03:11:25.447601] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.839 [2024-05-13 03:11:25.447637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.839 [2024-05-13 03:11:25.447656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:34.839 [2024-05-13 03:11:25.463284] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.839 [2024-05-13 03:11:25.463320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.839 [2024-05-13 03:11:25.463340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:34.839 [2024-05-13 03:11:25.479008] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.839 [2024-05-13 03:11:25.479056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.839 [2024-05-13 03:11:25.479075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:34.839 [2024-05-13 03:11:25.495560] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.839 [2024-05-13 03:11:25.495596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.839 [2024-05-13 03:11:25.495616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:34.839 [2024-05-13 03:11:25.511748] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.839 [2024-05-13 03:11:25.511782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.839 [2024-05-13 03:11:25.511800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:34.839 [2024-05-13 03:11:25.528033] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.839 [2024-05-13 03:11:25.528069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.839 [2024-05-13 03:11:25.528090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:34.839 [2024-05-13 03:11:25.544294] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.839 [2024-05-13 03:11:25.544330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.839 [2024-05-13 03:11:25.544350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:34.839 [2024-05-13 03:11:25.560709] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.839 [2024-05-13 03:11:25.560756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.839 [2024-05-13 03:11:25.560773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:34.839 [2024-05-13 03:11:25.576917] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.839 [2024-05-13 03:11:25.576947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.839 [2024-05-13 03:11:25.576964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:34.839 [2024-05-13 03:11:25.593310] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.839 [2024-05-13 03:11:25.593345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.839 [2024-05-13 03:11:25.593365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:30:34.839 [2024-05-13 03:11:25.610314] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.839 [2024-05-13 03:11:25.610350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.839 [2024-05-13 03:11:25.610370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:34.839 [2024-05-13 03:11:25.627032] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:34.839 [2024-05-13 03:11:25.627068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.839 [2024-05-13 03:11:25.627087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.098 [2024-05-13 03:11:25.644416] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:35.098 [2024-05-13 03:11:25.644467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.098 [2024-05-13 03:11:25.644490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:35.098 [2024-05-13 03:11:25.660635] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:35.098 [2024-05-13 03:11:25.660671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.098 [2024-05-13 03:11:25.660691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.098 [2024-05-13 03:11:25.676688] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:35.098 [2024-05-13 03:11:25.676744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.098 [2024-05-13 03:11:25.676771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.098 [2024-05-13 03:11:25.692989] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:35.098 [2024-05-13 03:11:25.693024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.098 [2024-05-13 03:11:25.693067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.098 [2024-05-13 03:11:25.709385] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:35.098 [2024-05-13 03:11:25.709420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.098 [2024-05-13 03:11:25.709439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:35.098 [2024-05-13 03:11:25.725575] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:35.098 [2024-05-13 03:11:25.725610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.098 [2024-05-13 03:11:25.725630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.098 [2024-05-13 03:11:25.741677] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:35.098 [2024-05-13 03:11:25.741742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.098 [2024-05-13 03:11:25.741769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.098 [2024-05-13 03:11:25.758019] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:35.098 [2024-05-13 03:11:25.758049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.098 [2024-05-13 03:11:25.758083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.098 [2024-05-13 03:11:25.774459] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:35.098 [2024-05-13 03:11:25.774495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.098 [2024-05-13 03:11:25.774515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:35.098 [2024-05-13 03:11:25.790783] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:35.098 [2024-05-13 03:11:25.790812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.098 [2024-05-13 03:11:25.790841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.098 [2024-05-13 03:11:25.807155] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:35.098 [2024-05-13 03:11:25.807189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.098 [2024-05-13 03:11:25.807209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.098 [2024-05-13 03:11:25.823765] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:35.098 [2024-05-13 03:11:25.823794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.098 [2024-05-13 03:11:25.823814] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.098 [2024-05-13 03:11:25.840510] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:35.098 [2024-05-13 03:11:25.840547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.098 [2024-05-13 03:11:25.840567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:35.098 [2024-05-13 03:11:25.857186] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:35.098 [2024-05-13 03:11:25.857222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.098 [2024-05-13 03:11:25.857241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.098 [2024-05-13 03:11:25.873748] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:35.098 [2024-05-13 03:11:25.873792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.098 [2024-05-13 03:11:25.873809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.098 [2024-05-13 03:11:25.890398] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:35.098 [2024-05-13 03:11:25.890434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.098 [2024-05-13 03:11:25.890453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.357 [2024-05-13 03:11:25.908241] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:35.357 [2024-05-13 03:11:25.908276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.357 [2024-05-13 03:11:25.908296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:35.357 [2024-05-13 03:11:25.924679] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:35.357 [2024-05-13 03:11:25.924721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.357 [2024-05-13 03:11:25.924755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.357 [2024-05-13 03:11:25.941218] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:35.357 [2024-05-13 03:11:25.941259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:35.357 [2024-05-13 03:11:25.941279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.357 [2024-05-13 03:11:25.957634] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:35.357 [2024-05-13 03:11:25.957670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.357 [2024-05-13 03:11:25.957690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.357 [2024-05-13 03:11:25.973749] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:35.357 [2024-05-13 03:11:25.973778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.357 [2024-05-13 03:11:25.973795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:35.357 [2024-05-13 03:11:25.989847] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:35.357 [2024-05-13 03:11:25.989877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.357 [2024-05-13 03:11:25.989895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.357 [2024-05-13 03:11:26.006222] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:35.357 [2024-05-13 03:11:26.006259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.357 [2024-05-13 03:11:26.006282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.357 [2024-05-13 03:11:26.022261] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:35.357 [2024-05-13 03:11:26.022296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.357 [2024-05-13 03:11:26.022315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.357 [2024-05-13 03:11:26.038546] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:35.357 [2024-05-13 03:11:26.038581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.357 [2024-05-13 03:11:26.038600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:35.357 [2024-05-13 03:11:26.054672] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:35.357 [2024-05-13 03:11:26.054727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.357 [2024-05-13 03:11:26.054766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.357 [2024-05-13 03:11:26.070951] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:35.357 [2024-05-13 03:11:26.070980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.357 [2024-05-13 03:11:26.071001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.358 [2024-05-13 03:11:26.087387] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:35.358 [2024-05-13 03:11:26.087422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.358 [2024-05-13 03:11:26.087442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.358 [2024-05-13 03:11:26.103643] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:35.358 [2024-05-13 03:11:26.103678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.358 [2024-05-13 03:11:26.103706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:35.358 [2024-05-13 03:11:26.120296] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:35.358 [2024-05-13 03:11:26.120332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.358 [2024-05-13 03:11:26.120352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.358 [2024-05-13 03:11:26.137133] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:35.358 [2024-05-13 03:11:26.137167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.358 [2024-05-13 03:11:26.137187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.358 [2024-05-13 03:11:26.153835] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:35.358 [2024-05-13 03:11:26.153865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.358 [2024-05-13 03:11:26.153883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.616 [2024-05-13 03:11:26.170898] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:35.616 [2024-05-13 03:11:26.170928] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.616 [2024-05-13 03:11:26.170945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:35.616 [2024-05-13 03:11:26.187361] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:35.616 [2024-05-13 03:11:26.187397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.616 [2024-05-13 03:11:26.187417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.616 [2024-05-13 03:11:26.203610] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:35.616 [2024-05-13 03:11:26.203645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.616 [2024-05-13 03:11:26.203665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.616 [2024-05-13 03:11:26.220254] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:35.616 [2024-05-13 03:11:26.220298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.616 [2024-05-13 03:11:26.220319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.616 [2024-05-13 03:11:26.236498] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:35.616 [2024-05-13 03:11:26.236534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.616 [2024-05-13 03:11:26.236553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:35.616 [2024-05-13 03:11:26.252716] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:35.616 [2024-05-13 03:11:26.252771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.616 [2024-05-13 03:11:26.252788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.616 [2024-05-13 03:11:26.268999] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:35.616 [2024-05-13 03:11:26.269046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.616 [2024-05-13 03:11:26.269066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.616 [2024-05-13 03:11:26.285606] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 
00:30:35.616 [2024-05-13 03:11:26.285641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.616 [2024-05-13 03:11:26.285662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.616 [2024-05-13 03:11:26.301754] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:35.617 [2024-05-13 03:11:26.301784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.617 [2024-05-13 03:11:26.301801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:35.617 [2024-05-13 03:11:26.318356] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:35.617 [2024-05-13 03:11:26.318393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.617 [2024-05-13 03:11:26.318413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.617 [2024-05-13 03:11:26.334812] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:35.617 [2024-05-13 03:11:26.334843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.617 [2024-05-13 03:11:26.334860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.617 [2024-05-13 03:11:26.351262] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:35.617 [2024-05-13 03:11:26.351298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.617 [2024-05-13 03:11:26.351318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.617 [2024-05-13 03:11:26.367506] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:35.617 [2024-05-13 03:11:26.367541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.617 [2024-05-13 03:11:26.367561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:35.617 [2024-05-13 03:11:26.383977] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:35.617 [2024-05-13 03:11:26.384026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.617 [2024-05-13 03:11:26.384046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.617 [2024-05-13 03:11:26.400592] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:35.617 [2024-05-13 03:11:26.400627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.617 [2024-05-13 03:11:26.400646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.617 [2024-05-13 03:11:26.417679] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:35.617 [2024-05-13 03:11:26.417723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.617 [2024-05-13 03:11:26.417755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.874 [2024-05-13 03:11:26.434444] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:35.874 [2024-05-13 03:11:26.434480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.874 [2024-05-13 03:11:26.434500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:35.874 [2024-05-13 03:11:26.450669] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1de21e0) 00:30:35.874 [2024-05-13 03:11:26.450716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.874 [2024-05-13 03:11:26.450751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.874 00:30:35.874 Latency(us) 00:30:35.874 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:35.874 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:30:35.874 nvme0n1 : 2.01 1861.76 232.72 0.00 0.00 8589.16 7670.14 19515.16 00:30:35.874 =================================================================================================================== 00:30:35.874 Total : 1861.76 232.72 0.00 0.00 8589.16 7670.14 19515.16 00:30:35.874 0 00:30:35.874 03:11:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:35.874 03:11:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:35.874 03:11:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:35.874 03:11:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:35.874 | .driver_specific 00:30:35.874 | .nvme_error 00:30:35.874 | .status_code 00:30:35.874 | .command_transient_transport_error' 00:30:36.132 03:11:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 120 > 0 )) 00:30:36.132 03:11:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 478524 00:30:36.132 03:11:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 478524 ']' 00:30:36.132 
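The pass/fail gate just above ((( 120 > 0 ))) is derived by asking the bdevperf instance, over its RPC socket, for per-bdev NVMe error counters and filtering the JSON with jq. A minimal sketch of that query, reconstructed from the trace (the rpc.py path, socket path, and jq filter are taken verbatim from the lines above; the surrounding shell is assumed):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Return the number of completions that ended in COMMAND TRANSIENT TRANSPORT
  # ERROR for the given bdev, as accumulated by bdevperf when
  # bdev_nvme_set_options --nvme-error-stat is enabled.
  get_transient_errcount() {
      "$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b "$1" |
          jq -r '.bdevs[0]
              | .driver_specific
              | .nvme_error
              | .status_code
              | .command_transient_transport_error'
  }
  count=$(get_transient_errcount nvme0n1)
  # The digest-error test passes only if at least one injected corruption was
  # observed by the host; the run above reports 120 such completions.
  (( count > 0 ))

Each completion printed with "(00/22)" in the trace is status code type 0x0 (generic command status) with status code 0x22, which the driver decodes as a transient transport error; that is the counter being read here.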
03:11:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 478524 00:30:36.132 03:11:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:30:36.132 03:11:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:36.132 03:11:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 478524 00:30:36.132 03:11:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:30:36.132 03:11:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:30:36.132 03:11:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 478524' 00:30:36.132 killing process with pid 478524 00:30:36.132 03:11:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 478524 00:30:36.132 Received shutdown signal, test time was about 2.000000 seconds 00:30:36.132 00:30:36.132 Latency(us) 00:30:36.132 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:36.133 =================================================================================================================== 00:30:36.133 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:36.133 03:11:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 478524 00:30:36.391 03:11:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:30:36.391 03:11:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:30:36.391 03:11:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:30:36.391 03:11:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:30:36.391 03:11:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:30:36.391 03:11:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=478939 00:30:36.391 03:11:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:30:36.391 03:11:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 478939 /var/tmp/bperf.sock 00:30:36.391 03:11:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 478939 ']' 00:30:36.391 03:11:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:36.391 03:11:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:36.391 03:11:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:36.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:36.391 03:11:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:36.391 03:11:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:36.391 [2024-05-13 03:11:27.068333] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 
00:30:36.391 [2024-05-13 03:11:27.068430] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid478939 ] 00:30:36.391 EAL: No free 2048 kB hugepages reported on node 1 00:30:36.391 [2024-05-13 03:11:27.101244] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:36.391 [2024-05-13 03:11:27.128855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:36.650 [2024-05-13 03:11:27.218193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:36.650 03:11:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:36.650 03:11:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:30:36.650 03:11:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:36.650 03:11:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:36.908 03:11:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:36.908 03:11:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.908 03:11:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:36.908 03:11:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.908 03:11:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:36.908 03:11:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:37.476 nvme0n1 00:30:37.476 03:11:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:30:37.476 03:11:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.476 03:11:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:37.476 03:11:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.476 03:11:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:37.476 03:11:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:37.476 Running I/O for 2 seconds... 
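Before the 2-second random-write pass starts, the trace shows the same preparation pattern as the read pass: bdevperf is launched with -w randwrite -o 4096 -q 128 against /var/tmp/bperf.sock, error statistics and unlimited retries are enabled, crc32c error injection is switched off while the controller attaches with data digest (--ddgst) enabled, and only then is crc32c corruption injected before perform_tests runs. A condensed sketch of that sequence, using only the commands visible in the trace (rpc_cmd is assumed to address the NVMe-oF target application on its default RPC socket, bperf_rpc the bdevperf instance):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  bperf_rpc() { "$rpc" -s /var/tmp/bperf.sock "$@"; }
  rpc_cmd()   { "$rpc" "$@"; }   # assumption: default target socket

  # Count NVMe errors per status code and never give up on retries, so injected
  # digest failures are recorded rather than failing the job outright.
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Keep crc32c healthy while the controller attaches...
  rpc_cmd accel_error_inject_error -o crc32c -t disable
  # ...attach the remote namespace with data digest verification enabled...
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # ...then start corrupting crc32c results (arguments as in the trace) so that
  # data-digest calculations intermittently produce wrong values and digest
  # mismatches are reported.
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
  # Kick off the timed workload through bdevperf's RPC helper.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests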
00:30:37.476 [2024-05-13 03:11:28.254916] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:37.476 [2024-05-13 03:11:28.255354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.476 [2024-05-13 03:11:28.255395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.476 [2024-05-13 03:11:28.268977] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:37.476 [2024-05-13 03:11:28.269327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.476 [2024-05-13 03:11:28.269373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.734 [2024-05-13 03:11:28.282634] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:37.734 [2024-05-13 03:11:28.283023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.734 [2024-05-13 03:11:28.283053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.734 [2024-05-13 03:11:28.296628] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:37.734 [2024-05-13 03:11:28.296987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.734 [2024-05-13 03:11:28.297040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.734 [2024-05-13 03:11:28.310735] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:37.734 [2024-05-13 03:11:28.311113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.734 [2024-05-13 03:11:28.311142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.734 [2024-05-13 03:11:28.324704] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:37.734 [2024-05-13 03:11:28.325069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.734 [2024-05-13 03:11:28.325112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.734 [2024-05-13 03:11:28.338796] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:37.734 [2024-05-13 03:11:28.339173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.734 [2024-05-13 03:11:28.339202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:37.734 [2024-05-13 03:11:28.352751] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:37.734 [2024-05-13 03:11:28.353113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.734 [2024-05-13 03:11:28.353145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.734 [2024-05-13 03:11:28.366623] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:37.734 [2024-05-13 03:11:28.366996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.734 [2024-05-13 03:11:28.367025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.734 [2024-05-13 03:11:28.380614] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:37.734 [2024-05-13 03:11:28.380982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.734 [2024-05-13 03:11:28.381011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.734 [2024-05-13 03:11:28.394578] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:37.734 [2024-05-13 03:11:28.394935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.734 [2024-05-13 03:11:28.394964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.734 [2024-05-13 03:11:28.408494] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:37.734 [2024-05-13 03:11:28.408837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.734 [2024-05-13 03:11:28.408880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.734 [2024-05-13 03:11:28.422483] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:37.734 [2024-05-13 03:11:28.422839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.734 [2024-05-13 03:11:28.422868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.734 [2024-05-13 03:11:28.436439] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:37.735 [2024-05-13 03:11:28.436791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.735 [2024-05-13 03:11:28.436820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:37.735 [2024-05-13 03:11:28.450379] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:37.735 [2024-05-13 03:11:28.450764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.735 [2024-05-13 03:11:28.450806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.735 [2024-05-13 03:11:28.464293] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:37.735 [2024-05-13 03:11:28.464635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.735 [2024-05-13 03:11:28.464666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.735 [2024-05-13 03:11:28.478217] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:37.735 [2024-05-13 03:11:28.478595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.735 [2024-05-13 03:11:28.478627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.735 [2024-05-13 03:11:28.492159] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:37.735 [2024-05-13 03:11:28.492496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.735 [2024-05-13 03:11:28.492527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.735 [2024-05-13 03:11:28.505996] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:37.735 [2024-05-13 03:11:28.506366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.735 [2024-05-13 03:11:28.506397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.735 [2024-05-13 03:11:28.519882] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:37.735 [2024-05-13 03:11:28.520255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.735 [2024-05-13 03:11:28.520286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.735 [2024-05-13 03:11:28.533780] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:37.735 [2024-05-13 03:11:28.534158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.735 [2024-05-13 03:11:28.534190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:37.993 [2024-05-13 03:11:28.547489] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:37.993 [2024-05-13 03:11:28.547834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.993 [2024-05-13 03:11:28.547864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.993 [2024-05-13 03:11:28.561458] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:37.993 [2024-05-13 03:11:28.561819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.993 [2024-05-13 03:11:28.561847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.993 [2024-05-13 03:11:28.575392] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:37.993 [2024-05-13 03:11:28.575750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.993 [2024-05-13 03:11:28.575778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.993 [2024-05-13 03:11:28.589343] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:37.993 [2024-05-13 03:11:28.589687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.993 [2024-05-13 03:11:28.589741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.993 [2024-05-13 03:11:28.603250] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:37.993 [2024-05-13 03:11:28.603593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.993 [2024-05-13 03:11:28.603624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.993 [2024-05-13 03:11:28.617108] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:37.993 [2024-05-13 03:11:28.617449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.993 [2024-05-13 03:11:28.617480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.993 [2024-05-13 03:11:28.631011] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:37.993 [2024-05-13 03:11:28.631352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.993 [2024-05-13 03:11:28.631382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:37.993 [2024-05-13 03:11:28.644836] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:37.993 [2024-05-13 03:11:28.645178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.993 [2024-05-13 03:11:28.645209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.993 [2024-05-13 03:11:28.658687] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:37.993 [2024-05-13 03:11:28.659070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.993 [2024-05-13 03:11:28.659106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.993 [2024-05-13 03:11:28.672583] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:37.993 [2024-05-13 03:11:28.672925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.993 [2024-05-13 03:11:28.672968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.993 [2024-05-13 03:11:28.686381] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:37.993 [2024-05-13 03:11:28.686721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.993 [2024-05-13 03:11:28.686764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.993 [2024-05-13 03:11:28.700265] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:37.993 [2024-05-13 03:11:28.700603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.993 [2024-05-13 03:11:28.700633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.993 [2024-05-13 03:11:28.714209] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:37.993 [2024-05-13 03:11:28.714549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.993 [2024-05-13 03:11:28.714580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.993 [2024-05-13 03:11:28.728152] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:37.993 [2024-05-13 03:11:28.728491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.993 [2024-05-13 03:11:28.728522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:37.993 [2024-05-13 03:11:28.742100] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:37.993 [2024-05-13 03:11:28.742442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.993 [2024-05-13 03:11:28.742477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.993 [2024-05-13 03:11:28.756035] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:37.993 [2024-05-13 03:11:28.756408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.993 [2024-05-13 03:11:28.756439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.993 [2024-05-13 03:11:28.769913] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:37.993 [2024-05-13 03:11:28.770290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.993 [2024-05-13 03:11:28.770322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.993 [2024-05-13 03:11:28.783858] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:37.993 [2024-05-13 03:11:28.784231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.993 [2024-05-13 03:11:28.784268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.251 [2024-05-13 03:11:28.797455] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.251 [2024-05-13 03:11:28.797808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.251 [2024-05-13 03:11:28.797837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.251 [2024-05-13 03:11:28.810998] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.251 [2024-05-13 03:11:28.811338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.251 [2024-05-13 03:11:28.811369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.251 [2024-05-13 03:11:28.824813] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.251 [2024-05-13 03:11:28.825183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.251 [2024-05-13 03:11:28.825215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:38.251 [2024-05-13 03:11:28.838767] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.251 [2024-05-13 03:11:28.839154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.251 [2024-05-13 03:11:28.839185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.251 [2024-05-13 03:11:28.852538] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.251 [2024-05-13 03:11:28.852897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.251 [2024-05-13 03:11:28.852925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.251 [2024-05-13 03:11:28.866406] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.251 [2024-05-13 03:11:28.866761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.251 [2024-05-13 03:11:28.866806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.251 [2024-05-13 03:11:28.880336] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.251 [2024-05-13 03:11:28.880677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.251 [2024-05-13 03:11:28.880718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.251 [2024-05-13 03:11:28.894293] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.251 [2024-05-13 03:11:28.894633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.252 [2024-05-13 03:11:28.894665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.252 [2024-05-13 03:11:28.908201] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.252 [2024-05-13 03:11:28.908548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.252 [2024-05-13 03:11:28.908580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.252 [2024-05-13 03:11:28.922064] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.252 [2024-05-13 03:11:28.922407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.252 [2024-05-13 03:11:28.922438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:38.252 [2024-05-13 03:11:28.935935] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.252 [2024-05-13 03:11:28.936299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.252 [2024-05-13 03:11:28.936329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.252 [2024-05-13 03:11:28.949853] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.252 [2024-05-13 03:11:28.950222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.252 [2024-05-13 03:11:28.950254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.252 [2024-05-13 03:11:28.963653] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.252 [2024-05-13 03:11:28.964020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.252 [2024-05-13 03:11:28.964052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.252 [2024-05-13 03:11:28.977517] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.252 [2024-05-13 03:11:28.977881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.252 [2024-05-13 03:11:28.977910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.252 [2024-05-13 03:11:28.991408] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.252 [2024-05-13 03:11:28.991760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.252 [2024-05-13 03:11:28.991788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.252 [2024-05-13 03:11:29.005293] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.252 [2024-05-13 03:11:29.005635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.252 [2024-05-13 03:11:29.005667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.252 [2024-05-13 03:11:29.019151] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.252 [2024-05-13 03:11:29.019491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.252 [2024-05-13 03:11:29.019525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:38.252 [2024-05-13 03:11:29.032969] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.252 [2024-05-13 03:11:29.033326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.252 [2024-05-13 03:11:29.033360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.252 [2024-05-13 03:11:29.046803] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.252 [2024-05-13 03:11:29.047264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.252 [2024-05-13 03:11:29.047298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.510 [2024-05-13 03:11:29.060521] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.510 [2024-05-13 03:11:29.060883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.510 [2024-05-13 03:11:29.060913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.510 [2024-05-13 03:11:29.074387] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.510 [2024-05-13 03:11:29.074729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.510 [2024-05-13 03:11:29.074772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.510 [2024-05-13 03:11:29.088236] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.510 [2024-05-13 03:11:29.088579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.510 [2024-05-13 03:11:29.088611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.510 [2024-05-13 03:11:29.101997] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.510 [2024-05-13 03:11:29.102379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.510 [2024-05-13 03:11:29.102410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.510 [2024-05-13 03:11:29.115747] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.510 [2024-05-13 03:11:29.116081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.510 [2024-05-13 03:11:29.116112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:38.510 [2024-05-13 03:11:29.129310] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.510 [2024-05-13 03:11:29.129649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.510 [2024-05-13 03:11:29.129680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.510 [2024-05-13 03:11:29.143056] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.510 [2024-05-13 03:11:29.143407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.510 [2024-05-13 03:11:29.143442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.510 [2024-05-13 03:11:29.156915] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.510 [2024-05-13 03:11:29.157260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.510 [2024-05-13 03:11:29.157290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.510 [2024-05-13 03:11:29.170705] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.510 [2024-05-13 03:11:29.171076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.510 [2024-05-13 03:11:29.171106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.510 [2024-05-13 03:11:29.184594] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.510 [2024-05-13 03:11:29.184959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.510 [2024-05-13 03:11:29.185001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.510 [2024-05-13 03:11:29.198481] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.510 [2024-05-13 03:11:29.198838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.510 [2024-05-13 03:11:29.198865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.510 [2024-05-13 03:11:29.212318] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.510 [2024-05-13 03:11:29.212660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.510 [2024-05-13 03:11:29.212693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:38.510 [2024-05-13 03:11:29.226197] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.510 [2024-05-13 03:11:29.226539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.510 [2024-05-13 03:11:29.226569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.510 [2024-05-13 03:11:29.240096] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.510 [2024-05-13 03:11:29.240436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.510 [2024-05-13 03:11:29.240466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.510 [2024-05-13 03:11:29.253945] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.510 [2024-05-13 03:11:29.254317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.510 [2024-05-13 03:11:29.254347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.510 [2024-05-13 03:11:29.267804] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.510 [2024-05-13 03:11:29.268180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.510 [2024-05-13 03:11:29.268213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.510 [2024-05-13 03:11:29.281761] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.510 [2024-05-13 03:11:29.282109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.510 [2024-05-13 03:11:29.282140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.510 [2024-05-13 03:11:29.295560] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.510 [2024-05-13 03:11:29.295919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.510 [2024-05-13 03:11:29.295962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.510 [2024-05-13 03:11:29.309401] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.510 [2024-05-13 03:11:29.309759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.510 [2024-05-13 03:11:29.309788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:38.768 [2024-05-13 03:11:29.323093] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.768 [2024-05-13 03:11:29.323428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.768 [2024-05-13 03:11:29.323459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.768 [2024-05-13 03:11:29.336942] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.768 [2024-05-13 03:11:29.337294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.768 [2024-05-13 03:11:29.337325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.768 [2024-05-13 03:11:29.350942] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.768 [2024-05-13 03:11:29.351323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.768 [2024-05-13 03:11:29.351353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.768 [2024-05-13 03:11:29.364844] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.768 [2024-05-13 03:11:29.365183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.768 [2024-05-13 03:11:29.365213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.768 [2024-05-13 03:11:29.378686] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.768 [2024-05-13 03:11:29.379072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.768 [2024-05-13 03:11:29.379103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.768 [2024-05-13 03:11:29.392535] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.768 [2024-05-13 03:11:29.392921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.768 [2024-05-13 03:11:29.392947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.768 [2024-05-13 03:11:29.406361] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.768 [2024-05-13 03:11:29.406715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.768 [2024-05-13 03:11:29.406746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:38.768 [2024-05-13 03:11:29.420213] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.768 [2024-05-13 03:11:29.420549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.768 [2024-05-13 03:11:29.420580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.768 [2024-05-13 03:11:29.434074] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.768 [2024-05-13 03:11:29.434414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.768 [2024-05-13 03:11:29.434444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.768 [2024-05-13 03:11:29.447939] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.768 [2024-05-13 03:11:29.448305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.768 [2024-05-13 03:11:29.448336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.768 [2024-05-13 03:11:29.461855] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.768 [2024-05-13 03:11:29.462198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.768 [2024-05-13 03:11:29.462228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.768 [2024-05-13 03:11:29.475692] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.768 [2024-05-13 03:11:29.476075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.768 [2024-05-13 03:11:29.476106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.768 [2024-05-13 03:11:29.489499] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.768 [2024-05-13 03:11:29.489858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.768 [2024-05-13 03:11:29.489885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.768 [2024-05-13 03:11:29.503336] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.768 [2024-05-13 03:11:29.503676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.768 [2024-05-13 03:11:29.503715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:38.768 [2024-05-13 03:11:29.517190] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.768 [2024-05-13 03:11:29.517532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.768 [2024-05-13 03:11:29.517562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.768 [2024-05-13 03:11:29.530962] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.768 [2024-05-13 03:11:29.531316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.768 [2024-05-13 03:11:29.531346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.768 [2024-05-13 03:11:29.544829] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.768 [2024-05-13 03:11:29.545186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.768 [2024-05-13 03:11:29.545218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.768 [2024-05-13 03:11:29.558648] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:38.768 [2024-05-13 03:11:29.559042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.768 [2024-05-13 03:11:29.559074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.025 [2024-05-13 03:11:29.572289] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:39.025 [2024-05-13 03:11:29.572632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.025 [2024-05-13 03:11:29.572663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.025 [2024-05-13 03:11:29.586101] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:39.025 [2024-05-13 03:11:29.586438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.025 [2024-05-13 03:11:29.586469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.025 [2024-05-13 03:11:29.599985] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:39.025 [2024-05-13 03:11:29.600340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.025 [2024-05-13 03:11:29.600370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:39.025 [2024-05-13 03:11:29.613879] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:39.025 [2024-05-13 03:11:29.614259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.025 [2024-05-13 03:11:29.614292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.025 [2024-05-13 03:11:29.627718] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:39.025 [2024-05-13 03:11:29.628076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.025 [2024-05-13 03:11:29.628111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.025 [2024-05-13 03:11:29.641486] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:39.025 [2024-05-13 03:11:29.641860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.025 [2024-05-13 03:11:29.641898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.025 [2024-05-13 03:11:29.655253] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:39.025 [2024-05-13 03:11:29.655592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.025 [2024-05-13 03:11:29.655622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.025 [2024-05-13 03:11:29.669095] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:39.025 [2024-05-13 03:11:29.669436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.026 [2024-05-13 03:11:29.669467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.026 [2024-05-13 03:11:29.682928] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:39.026 [2024-05-13 03:11:29.683299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.026 [2024-05-13 03:11:29.683338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.026 [2024-05-13 03:11:29.696849] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:39.026 [2024-05-13 03:11:29.697210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.026 [2024-05-13 03:11:29.697251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:39.026 [2024-05-13 03:11:29.710646] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:39.026 [2024-05-13 03:11:29.711037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.026 [2024-05-13 03:11:29.711078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.026 [2024-05-13 03:11:29.724558] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:39.026 [2024-05-13 03:11:29.724966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.026 [2024-05-13 03:11:29.724994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.026 [2024-05-13 03:11:29.738430] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:39.026 [2024-05-13 03:11:29.738778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.026 [2024-05-13 03:11:29.738820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.026 [2024-05-13 03:11:29.752325] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:39.026 [2024-05-13 03:11:29.752705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.026 [2024-05-13 03:11:29.752763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.026 [2024-05-13 03:11:29.766178] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:39.026 [2024-05-13 03:11:29.766521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.026 [2024-05-13 03:11:29.766552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.026 [2024-05-13 03:11:29.780073] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:39.026 [2024-05-13 03:11:29.780410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.026 [2024-05-13 03:11:29.780440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.026 [2024-05-13 03:11:29.793908] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:39.026 [2024-05-13 03:11:29.794277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.026 [2024-05-13 03:11:29.794307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:39.026 [2024-05-13 03:11:29.807720] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:39.026 [2024-05-13 03:11:29.808075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.026 [2024-05-13 03:11:29.808106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.026 [2024-05-13 03:11:29.821541] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:39.026 [2024-05-13 03:11:29.821917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.026 [2024-05-13 03:11:29.821944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.284 [2024-05-13 03:11:29.835148] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:39.284 [2024-05-13 03:11:29.835488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.284 [2024-05-13 03:11:29.835518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.284 [2024-05-13 03:11:29.848929] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:39.284 [2024-05-13 03:11:29.849291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.284 [2024-05-13 03:11:29.849322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.284 [2024-05-13 03:11:29.862877] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:39.284 [2024-05-13 03:11:29.863250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.284 [2024-05-13 03:11:29.863280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.284 [2024-05-13 03:11:29.876720] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:39.284 [2024-05-13 03:11:29.877097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.284 [2024-05-13 03:11:29.877127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.284 [2024-05-13 03:11:29.890542] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:39.284 [2024-05-13 03:11:29.890903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.284 [2024-05-13 03:11:29.890944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:39.284 [2024-05-13 03:11:29.904411] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:39.284 [2024-05-13 03:11:29.904824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.284 [2024-05-13 03:11:29.904852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.284 [2024-05-13 03:11:29.918329] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:39.284 [2024-05-13 03:11:29.918668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.284 [2024-05-13 03:11:29.918712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.284 [2024-05-13 03:11:29.932100] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:39.284 [2024-05-13 03:11:29.932444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.284 [2024-05-13 03:11:29.932476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.284 [2024-05-13 03:11:29.946086] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:39.284 [2024-05-13 03:11:29.946420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.284 [2024-05-13 03:11:29.946454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.284 [2024-05-13 03:11:29.959896] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:39.284 [2024-05-13 03:11:29.960271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.284 [2024-05-13 03:11:29.960304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.284 [2024-05-13 03:11:29.973814] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:39.284 [2024-05-13 03:11:29.974255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.285 [2024-05-13 03:11:29.974286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.285 [2024-05-13 03:11:29.987425] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:39.285 [2024-05-13 03:11:29.987774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.285 [2024-05-13 03:11:29.987802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:39.285 [2024-05-13 03:11:30.001005] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:39.285 [2024-05-13 03:11:30.001312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.285 [2024-05-13 03:11:30.001340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.285 [2024-05-13 03:11:30.014518] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:39.285 [2024-05-13 03:11:30.014877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.285 [2024-05-13 03:11:30.014919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.285 [2024-05-13 03:11:30.028466] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:39.285 [2024-05-13 03:11:30.028824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.285 [2024-05-13 03:11:30.028865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.285 [2024-05-13 03:11:30.042511] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:39.285 [2024-05-13 03:11:30.042888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:77 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.285 [2024-05-13 03:11:30.042930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.285 [2024-05-13 03:11:30.056479] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:39.285 [2024-05-13 03:11:30.056839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.285 [2024-05-13 03:11:30.056871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.285 [2024-05-13 03:11:30.070480] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:39.285 [2024-05-13 03:11:30.070843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.285 [2024-05-13 03:11:30.070874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.285 [2024-05-13 03:11:30.084437] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:39.285 [2024-05-13 03:11:30.084795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.285 [2024-05-13 03:11:30.084824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:39.543 [2024-05-13 03:11:30.098232] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:39.543 [2024-05-13 03:11:30.098575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.543 [2024-05-13 03:11:30.098608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.543 [2024-05-13 03:11:30.111884] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:39.543 [2024-05-13 03:11:30.112230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.543 [2024-05-13 03:11:30.112272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.543 [2024-05-13 03:11:30.125625] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:39.543 [2024-05-13 03:11:30.126017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.543 [2024-05-13 03:11:30.126062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.543 [2024-05-13 03:11:30.139453] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:39.543 [2024-05-13 03:11:30.139804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.543 [2024-05-13 03:11:30.139832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.543 [2024-05-13 03:11:30.153221] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:39.543 [2024-05-13 03:11:30.153559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.543 [2024-05-13 03:11:30.153590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.543 [2024-05-13 03:11:30.167036] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:39.543 [2024-05-13 03:11:30.167394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.543 [2024-05-13 03:11:30.167425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.543 [2024-05-13 03:11:30.180918] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:39.543 [2024-05-13 03:11:30.181279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.543 [2024-05-13 03:11:30.181309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
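The completion half of each entry is spdk_nvme_print_completion rendering the CQE status halfword: the "(00/22)" pair is status code type 0x0 (generic command status) and status code 0x22, i.e. COMMAND TRANSIENT TRANSPORT ERROR, and p/m/dnr are flag bits of the same halfword. A small sketch of that unpacking, with the field layout taken from the NVMe base specification and `decode_cqe_status` being an illustrative name rather than an SPDK function:

```python
def decode_cqe_status(status: int) -> dict:
    """Unpack the 16-bit status halfword from completion dword 3 into the
    fields printed in the log (sc/sct plus the p, m and dnr flag bits)."""
    return {
        "p":   status & 0x1,           # phase tag
        "sc":  (status >> 1) & 0xFF,   # status code       -> 0x22 here
        "sct": (status >> 9) & 0x7,    # status code type  -> 0x0 (generic)
        "crd": (status >> 12) & 0x3,   # command retry delay
        "m":   (status >> 14) & 0x1,   # more
        "dnr": (status >> 15) & 0x1,   # do not retry
    }
```

It is this SCT/SC pair that the NVMe bdev layer tallies under command_transient_transport_error, which is the counter the test reads back after each run.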
00:30:39.543 [2024-05-13 03:11:30.194512] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:39.543 [2024-05-13 03:11:30.194867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.543 [2024-05-13 03:11:30.194896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.543 [2024-05-13 03:11:30.208192] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:39.543 [2024-05-13 03:11:30.208575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.543 [2024-05-13 03:11:30.208605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.543 [2024-05-13 03:11:30.221843] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:39.543 [2024-05-13 03:11:30.222193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.543 [2024-05-13 03:11:30.222224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.543 [2024-05-13 03:11:30.235505] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x793f20) with pdu=0x2000190fef90 00:30:39.543 [2024-05-13 03:11:30.235884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.543 [2024-05-13 03:11:30.235917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.543 00:30:39.543 Latency(us) 00:30:39.543 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:39.543 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:39.543 nvme0n1 : 2.01 18373.56 71.77 0.00 0.00 6949.96 5922.51 16311.18 00:30:39.543 =================================================================================================================== 00:30:39.543 Total : 18373.56 71.77 0.00 0.00 6949.96 5922.51 16311.18 00:30:39.543 0 00:30:39.543 03:11:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:39.543 03:11:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:39.543 03:11:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:39.543 03:11:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:39.543 | .driver_specific 00:30:39.543 | .nvme_error 00:30:39.543 | .status_code 00:30:39.543 | .command_transient_transport_error' 00:30:39.802 03:11:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 144 > 0 )) 00:30:39.802 03:11:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 478939 00:30:39.802 03:11:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 478939 ']' 
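get_transient_errcount issues bdev_get_iostat for nvme0n1 over the bdevperf RPC socket and pulls a single counter out of the reply with the jq filter shown in the trace; the test then only asserts that the count is non-zero ((( 144 > 0 ))). A standalone equivalent might look like the sketch below: the JSON path is copied from that jq filter, the socket and rpc.py path are the ones used throughout this run, and `transient_errcount` is a hypothetical helper, not part of the test suite. The per-status-code counters exist because bdev_nvme_set_options is called with --nvme-error-stat before the controller is attached.

```python
import json
import subprocess

RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

def transient_errcount(bdev: str = "nvme0n1", sock: str = "/var/tmp/bperf.sock") -> int:
    # Same query the test script makes: bdev_get_iostat over the bdevperf socket.
    out = subprocess.check_output([RPC, "-s", sock, "bdev_get_iostat", "-b", bdev])
    stats = json.loads(out)
    # Path mirrors the jq filter in the trace:
    # .bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error
    return (stats["bdevs"][0]["driver_specific"]["nvme_error"]
                 ["status_code"]["command_transient_transport_error"])

assert transient_errcount() > 0  # the test only checks "greater than zero"
```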
00:30:39.802 03:11:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 478939 00:30:39.802 03:11:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:30:39.802 03:11:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:39.802 03:11:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 478939 00:30:39.802 03:11:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:30:39.802 03:11:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:30:39.802 03:11:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 478939' 00:30:39.802 killing process with pid 478939 00:30:39.802 03:11:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 478939 00:30:39.802 Received shutdown signal, test time was about 2.000000 seconds 00:30:39.802 00:30:39.802 Latency(us) 00:30:39.802 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:39.802 =================================================================================================================== 00:30:39.802 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:39.802 03:11:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 478939 00:30:40.060 03:11:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:30:40.060 03:11:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:30:40.060 03:11:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:30:40.060 03:11:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:30:40.060 03:11:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:30:40.060 03:11:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=479343 00:30:40.060 03:11:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:30:40.060 03:11:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 479343 /var/tmp/bperf.sock 00:30:40.060 03:11:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 479343 ']' 00:30:40.060 03:11:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:40.060 03:11:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:40.060 03:11:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:40.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:40.060 03:11:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:40.060 03:11:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:40.060 [2024-05-13 03:11:30.824835] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 
00:30:40.060 [2024-05-13 03:11:30.824930] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid479343 ] 00:30:40.060 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:40.060 Zero copy mechanism will not be used. 00:30:40.060 EAL: No free 2048 kB hugepages reported on node 1 00:30:40.060 [2024-05-13 03:11:30.856309] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:40.319 [2024-05-13 03:11:30.887903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:40.319 [2024-05-13 03:11:30.976201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:40.319 03:11:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:40.319 03:11:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:30:40.319 03:11:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:40.320 03:11:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:40.578 03:11:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:40.578 03:11:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.578 03:11:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:40.578 03:11:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.578 03:11:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:40.578 03:11:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:41.149 nvme0n1 00:30:41.149 03:11:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:30:41.149 03:11:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.149 03:11:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:41.149 03:11:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.149 03:11:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:41.149 03:11:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:41.149 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:41.149 Zero copy mechanism will not be used. 00:30:41.149 Running I/O for 2 seconds... 
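Pulled together from the trace lines above, this second pass is the same experiment at a larger block size: bdevperf is started with -z against /var/tmp/bperf.sock doing randwrite with 131072-byte I/O at queue depth 16 for 2 seconds, error counters and retries are configured, the controller is attached with --ddgst so data digests are exchanged, crc32c corruption is injected, and perform_tests starts the workload. A consolidated sketch follows; every command and flag is taken from the trace, while the `rpc` wrapper is a hypothetical convenience:

```python
import subprocess

SPDK = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk"
BPERF_SOCK = "/var/tmp/bperf.sock"

def rpc(*args: str) -> None:
    # Thin wrapper over scripts/rpc.py against the bdevperf application socket.
    subprocess.check_call([f"{SPDK}/scripts/rpc.py", "-s", BPERF_SOCK, *args])

# bdevperf itself is already running in the background, launched as:
#   build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z

# Keep per-status-code NVMe error counters and set the bdev retry count to -1.
rpc("bdev_nvme_set_options", "--nvme-error-stat", "--bdev-retry-count", "-1")

# Attach the target over TCP with data digest enabled (--ddgst).
rpc("bdev_nvme_attach_controller", "--ddgst", "-t", "tcp", "-a", "10.0.0.2",
    "-s", "4420", "-f", "ipv4", "-n", "nqn.2016-06.io.spdk:cnode1", "-b", "nvme0")

# The test also injects crc32c corruption via:
#   accel_error_inject_error -o crc32c -t corrupt -i 32
# (issued through the suite's rpc_cmd helper; its socket is not shown in the
#  trace, so that call is left out of this sketch).

# Finally, start the timed workload through bdevperf's helper script.
subprocess.check_call([f"{SPDK}/examples/bdev/bdevperf/bdevperf.py",
                       "-s", BPERF_SOCK, "perform_tests"])
```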
00:30:41.149 [2024-05-13 03:11:31.824344] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:41.150 [2024-05-13 03:11:31.824867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.150 [2024-05-13 03:11:31.824906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:41.150 [2024-05-13 03:11:31.852431] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:41.150 [2024-05-13 03:11:31.853013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.150 [2024-05-13 03:11:31.853042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:41.150 [2024-05-13 03:11:31.881967] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:41.150 [2024-05-13 03:11:31.882722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.150 [2024-05-13 03:11:31.882750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:41.150 [2024-05-13 03:11:31.913688] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:41.150 [2024-05-13 03:11:31.914595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.150 [2024-05-13 03:11:31.914624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.150 [2024-05-13 03:11:31.945315] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:41.150 [2024-05-13 03:11:31.945856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.150 [2024-05-13 03:11:31.945884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:41.407 [2024-05-13 03:11:31.975646] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:41.407 [2024-05-13 03:11:31.976275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.407 [2024-05-13 03:11:31.976305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:41.407 [2024-05-13 03:11:32.005536] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:41.407 [2024-05-13 03:11:32.006271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.408 [2024-05-13 03:11:32.006300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:41.408 [2024-05-13 03:11:32.036143] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:41.408 [2024-05-13 03:11:32.036931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.408 [2024-05-13 03:11:32.036961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.408 [2024-05-13 03:11:32.066027] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:41.408 [2024-05-13 03:11:32.066460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.408 [2024-05-13 03:11:32.066510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:41.408 [2024-05-13 03:11:32.096817] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:41.408 [2024-05-13 03:11:32.097621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.408 [2024-05-13 03:11:32.097649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:41.408 [2024-05-13 03:11:32.125969] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:41.408 [2024-05-13 03:11:32.126653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.408 [2024-05-13 03:11:32.126701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:41.408 [2024-05-13 03:11:32.155339] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:41.408 [2024-05-13 03:11:32.155868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.408 [2024-05-13 03:11:32.155897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.408 [2024-05-13 03:11:32.186162] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:41.408 [2024-05-13 03:11:32.187060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.408 [2024-05-13 03:11:32.187089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:41.665 [2024-05-13 03:11:32.214542] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:41.665 [2024-05-13 03:11:32.215200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.665 [2024-05-13 03:11:32.215230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:41.665 [2024-05-13 03:11:32.243557] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:41.665 [2024-05-13 03:11:32.244375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.665 [2024-05-13 03:11:32.244404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:41.665 [2024-05-13 03:11:32.274035] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:41.665 [2024-05-13 03:11:32.274646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.665 [2024-05-13 03:11:32.274689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.665 [2024-05-13 03:11:32.302382] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:41.665 [2024-05-13 03:11:32.302800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.665 [2024-05-13 03:11:32.302828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:41.665 [2024-05-13 03:11:32.327571] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:41.666 [2024-05-13 03:11:32.328174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.666 [2024-05-13 03:11:32.328203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:41.666 [2024-05-13 03:11:32.357462] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:41.666 [2024-05-13 03:11:32.358080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.666 [2024-05-13 03:11:32.358111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:41.666 [2024-05-13 03:11:32.386008] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:41.666 [2024-05-13 03:11:32.386598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.666 [2024-05-13 03:11:32.386625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.666 [2024-05-13 03:11:32.414332] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:41.666 [2024-05-13 03:11:32.414852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.666 [2024-05-13 03:11:32.414881] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:41.666 [2024-05-13 03:11:32.444288] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:41.666 [2024-05-13 03:11:32.444990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.666 [2024-05-13 03:11:32.445018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:41.923 [2024-05-13 03:11:32.473810] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:41.923 [2024-05-13 03:11:32.474418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.923 [2024-05-13 03:11:32.474447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:41.923 [2024-05-13 03:11:32.504088] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:41.923 [2024-05-13 03:11:32.504912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.923 [2024-05-13 03:11:32.504943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.924 [2024-05-13 03:11:32.533133] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:41.924 [2024-05-13 03:11:32.533642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.924 [2024-05-13 03:11:32.533670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:41.924 [2024-05-13 03:11:32.563982] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:41.924 [2024-05-13 03:11:32.564578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.924 [2024-05-13 03:11:32.564606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:41.924 [2024-05-13 03:11:32.594333] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:41.924 [2024-05-13 03:11:32.594849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.924 [2024-05-13 03:11:32.594878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:41.924 [2024-05-13 03:11:32.624639] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:41.924 [2024-05-13 03:11:32.625532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.924 
[2024-05-13 03:11:32.625560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.924 [2024-05-13 03:11:32.655053] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:41.924 [2024-05-13 03:11:32.655864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.924 [2024-05-13 03:11:32.655893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:41.924 [2024-05-13 03:11:32.684234] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:41.924 [2024-05-13 03:11:32.685131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.924 [2024-05-13 03:11:32.685159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:41.924 [2024-05-13 03:11:32.714889] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:41.924 [2024-05-13 03:11:32.715342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.924 [2024-05-13 03:11:32.715386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:42.182 [2024-05-13 03:11:32.745256] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:42.182 [2024-05-13 03:11:32.745865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.182 [2024-05-13 03:11:32.745894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.182 [2024-05-13 03:11:32.773670] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:42.182 [2024-05-13 03:11:32.774536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.182 [2024-05-13 03:11:32.774564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:42.182 [2024-05-13 03:11:32.803173] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:42.182 [2024-05-13 03:11:32.803901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.182 [2024-05-13 03:11:32.803930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:42.182 [2024-05-13 03:11:32.832477] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:42.182 [2024-05-13 03:11:32.832997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:30:42.182 [2024-05-13 03:11:32.833029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:42.182 [2024-05-13 03:11:32.863493] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:42.182 [2024-05-13 03:11:32.864137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.182 [2024-05-13 03:11:32.864165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.182 [2024-05-13 03:11:32.893244] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:42.182 [2024-05-13 03:11:32.893964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.182 [2024-05-13 03:11:32.893993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:42.182 [2024-05-13 03:11:32.923592] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:42.182 [2024-05-13 03:11:32.924310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.182 [2024-05-13 03:11:32.924338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:42.182 [2024-05-13 03:11:32.950784] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:42.182 [2024-05-13 03:11:32.951304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.182 [2024-05-13 03:11:32.951331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:42.182 [2024-05-13 03:11:32.976724] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:42.182 [2024-05-13 03:11:32.977319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.182 [2024-05-13 03:11:32.977347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.441 [2024-05-13 03:11:33.006984] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:42.441 [2024-05-13 03:11:33.007503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.441 [2024-05-13 03:11:33.007532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:42.441 [2024-05-13 03:11:33.037097] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:42.441 [2024-05-13 03:11:33.037856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.441 [2024-05-13 03:11:33.037884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:42.441 [2024-05-13 03:11:33.066374] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:42.441 [2024-05-13 03:11:33.067067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.441 [2024-05-13 03:11:33.067095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:42.441 [2024-05-13 03:11:33.096415] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:42.441 [2024-05-13 03:11:33.097071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.441 [2024-05-13 03:11:33.097117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.441 [2024-05-13 03:11:33.124025] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:42.441 [2024-05-13 03:11:33.124850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.441 [2024-05-13 03:11:33.124879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:42.441 [2024-05-13 03:11:33.153712] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:42.441 [2024-05-13 03:11:33.154602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.441 [2024-05-13 03:11:33.154632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:42.441 [2024-05-13 03:11:33.182144] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:42.441 [2024-05-13 03:11:33.182772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.441 [2024-05-13 03:11:33.182800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:42.441 [2024-05-13 03:11:33.212960] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:42.441 [2024-05-13 03:11:33.213764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.441 [2024-05-13 03:11:33.213792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.699 [2024-05-13 03:11:33.244250] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:42.699 [2024-05-13 03:11:33.244861] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.699 [2024-05-13 03:11:33.244892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:42.699 [2024-05-13 03:11:33.273399] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:42.699 [2024-05-13 03:11:33.274280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.699 [2024-05-13 03:11:33.274308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:42.699 [2024-05-13 03:11:33.304242] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:42.699 [2024-05-13 03:11:33.304967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.699 [2024-05-13 03:11:33.304997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:42.699 [2024-05-13 03:11:33.332860] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:42.699 [2024-05-13 03:11:33.333406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.699 [2024-05-13 03:11:33.333435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.699 [2024-05-13 03:11:33.361737] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:42.699 [2024-05-13 03:11:33.362406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.699 [2024-05-13 03:11:33.362435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:42.699 [2024-05-13 03:11:33.392496] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:42.699 [2024-05-13 03:11:33.393080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.699 [2024-05-13 03:11:33.393107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:42.700 [2024-05-13 03:11:33.420547] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:42.700 [2024-05-13 03:11:33.421318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.700 [2024-05-13 03:11:33.421345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:42.700 [2024-05-13 03:11:33.452024] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:42.700 [2024-05-13 03:11:33.452872] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.700 [2024-05-13 03:11:33.452900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.700 [2024-05-13 03:11:33.480824] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:42.700 [2024-05-13 03:11:33.481460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.700 [2024-05-13 03:11:33.481487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:42.957 [2024-05-13 03:11:33.507826] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:42.957 [2024-05-13 03:11:33.508358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.957 [2024-05-13 03:11:33.508385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:42.957 [2024-05-13 03:11:33.535173] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:42.957 [2024-05-13 03:11:33.536014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.957 [2024-05-13 03:11:33.536042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:42.957 [2024-05-13 03:11:33.567549] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:42.957 [2024-05-13 03:11:33.568537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.957 [2024-05-13 03:11:33.568564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.957 [2024-05-13 03:11:33.597945] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:42.958 [2024-05-13 03:11:33.598827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.958 [2024-05-13 03:11:33.598858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:42.958 [2024-05-13 03:11:33.627095] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:42.958 [2024-05-13 03:11:33.627542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.958 [2024-05-13 03:11:33.627569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:42.958 [2024-05-13 03:11:33.656170] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:42.958 
[2024-05-13 03:11:33.656718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.958 [2024-05-13 03:11:33.656744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:42.958 [2024-05-13 03:11:33.684081] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:42.958 [2024-05-13 03:11:33.684848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.958 [2024-05-13 03:11:33.684874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.958 [2024-05-13 03:11:33.712816] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:42.958 [2024-05-13 03:11:33.713452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.958 [2024-05-13 03:11:33.713478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:42.958 [2024-05-13 03:11:33.740148] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:42.958 [2024-05-13 03:11:33.740831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.958 [2024-05-13 03:11:33.740867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:43.215 [2024-05-13 03:11:33.770524] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:43.215 [2024-05-13 03:11:33.771180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.215 [2024-05-13 03:11:33.771207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:43.215 [2024-05-13 03:11:33.799500] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x794260) with pdu=0x2000190fef90 00:30:43.215 [2024-05-13 03:11:33.800237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.215 [2024-05-13 03:11:33.800263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.215 00:30:43.215 Latency(us) 00:30:43.215 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:43.215 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:30:43.215 nvme0n1 : 2.01 1051.82 131.48 0.00 0.00 15154.57 7087.60 33010.73 00:30:43.215 =================================================================================================================== 00:30:43.215 Total : 1051.82 131.48 0.00 0.00 15154.57 7087.60 33010.73 00:30:43.215 0 00:30:43.215 03:11:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 
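The summary row above is internally consistent with its job line: the MiB/s column is simply IOPS multiplied by the I/O size, so 1051.82 IOPS of 128 KiB writes is the reported 131.48 MiB/s, just as 18373.56 IOPS of 4 KiB writes gave 71.77 MiB/s in the earlier run. A quick check, with `mib_per_s` as an illustrative helper only:

```python
def mib_per_s(iops: float, io_size_bytes: int) -> float:
    # bdevperf's MiB/s column is IOPS times the I/O size.
    return iops * io_size_bytes / (1024 * 1024)

print(mib_per_s(18373.56, 4096))    # ~71.77, matching the 4 KiB run
print(mib_per_s(1051.82, 131072))   # ~131.48, matching this 128 KiB run
                                    # (tiny drift comes from the IOPS figure
                                    #  itself already being rounded)
```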
00:30:43.215 03:11:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:43.215 03:11:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:43.215 | .driver_specific 00:30:43.215 | .nvme_error 00:30:43.215 | .status_code 00:30:43.215 | .command_transient_transport_error' 00:30:43.215 03:11:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:43.472 03:11:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 68 > 0 )) 00:30:43.472 03:11:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 479343 00:30:43.473 03:11:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 479343 ']' 00:30:43.473 03:11:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 479343 00:30:43.473 03:11:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:30:43.473 03:11:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:43.473 03:11:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 479343 00:30:43.473 03:11:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:30:43.473 03:11:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:30:43.473 03:11:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 479343' 00:30:43.473 killing process with pid 479343 00:30:43.473 03:11:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 479343 00:30:43.473 Received shutdown signal, test time was about 2.000000 seconds 00:30:43.473 00:30:43.473 Latency(us) 00:30:43.473 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:43.473 =================================================================================================================== 00:30:43.473 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:43.473 03:11:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 479343 00:30:43.730 03:11:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 477978 00:30:43.730 03:11:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 477978 ']' 00:30:43.730 03:11:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 477978 00:30:43.730 03:11:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:30:43.730 03:11:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:43.730 03:11:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 477978 00:30:43.730 03:11:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:43.731 03:11:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:43.731 03:11:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 477978' 00:30:43.731 killing process with pid 477978 00:30:43.731 03:11:34 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 477978 00:30:43.731 [2024-05-13 03:11:34.363207] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:30:43.731 03:11:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 477978 00:30:43.989 00:30:43.989 real 0m15.334s 00:30:43.989 user 0m30.917s 00:30:43.989 sys 0m3.895s 00:30:43.989 03:11:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:43.989 03:11:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:43.989 ************************************ 00:30:43.989 END TEST nvmf_digest_error 00:30:43.989 ************************************ 00:30:43.989 03:11:34 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:30:43.989 03:11:34 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:30:43.989 03:11:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:43.989 03:11:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:30:43.989 03:11:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:43.989 03:11:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:30:43.989 03:11:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:43.989 03:11:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:43.989 rmmod nvme_tcp 00:30:43.989 rmmod nvme_fabrics 00:30:43.989 rmmod nvme_keyring 00:30:43.989 03:11:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:43.989 03:11:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:30:43.989 03:11:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:30:43.989 03:11:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 477978 ']' 00:30:43.989 03:11:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 477978 00:30:43.989 03:11:34 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 477978 ']' 00:30:43.989 03:11:34 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 477978 00:30:43.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (477978) - No such process 00:30:43.989 03:11:34 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 477978 is not found' 00:30:43.989 Process with pid 477978 is not found 00:30:43.989 03:11:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:43.989 03:11:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:43.989 03:11:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:43.989 03:11:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:43.989 03:11:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:43.989 03:11:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:43.989 03:11:34 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:43.989 03:11:34 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:46.524 03:11:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:46.524 00:30:46.524 real 0m35.026s 00:30:46.524 user 1m2.818s 
00:30:46.524 sys 0m9.080s 00:30:46.524 03:11:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:46.524 03:11:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:46.524 ************************************ 00:30:46.524 END TEST nvmf_digest 00:30:46.524 ************************************ 00:30:46.524 03:11:36 nvmf_tcp -- nvmf/nvmf.sh@109 -- # [[ 0 -eq 1 ]] 00:30:46.524 03:11:36 nvmf_tcp -- nvmf/nvmf.sh@114 -- # [[ 0 -eq 1 ]] 00:30:46.524 03:11:36 nvmf_tcp -- nvmf/nvmf.sh@119 -- # [[ phy == phy ]] 00:30:46.524 03:11:36 nvmf_tcp -- nvmf/nvmf.sh@120 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:46.524 03:11:36 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:46.524 03:11:36 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:46.524 03:11:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:46.524 ************************************ 00:30:46.524 START TEST nvmf_bdevperf 00:30:46.524 ************************************ 00:30:46.524 03:11:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:46.524 * Looking for test storage... 00:30:46.524 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:46.524 03:11:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:46.524 03:11:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:30:46.524 03:11:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:46.524 03:11:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:46.524 03:11:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:46.524 03:11:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:46.524 03:11:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:46.524 03:11:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:46.524 03:11:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:46.524 03:11:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:46.524 03:11:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:46.524 03:11:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:46.524 03:11:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:46.524 03:11:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:46.524 03:11:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:46.524 03:11:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:46.524 03:11:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:46.524 03:11:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:46.524 03:11:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:46.524 03:11:36 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 
00:30:46.524 03:11:36 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:46.524 03:11:36 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:46.524 03:11:36 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.524 03:11:36 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.525 03:11:36 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.525 03:11:36 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:30:46.525 03:11:36 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.525 03:11:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:30:46.525 03:11:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:46.525 03:11:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:46.525 03:11:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:46.525 03:11:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:46.525 03:11:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:46.525 03:11:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:46.525 03:11:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:46.525 03:11:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:46.525 
03:11:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:46.525 03:11:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:46.525 03:11:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:30:46.525 03:11:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:46.525 03:11:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:46.525 03:11:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:46.525 03:11:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:46.525 03:11:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:46.525 03:11:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:46.525 03:11:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:46.525 03:11:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:46.525 03:11:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:46.525 03:11:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:46.525 03:11:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:30:46.525 03:11:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:48.480 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:48.480 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:48.480 03:11:38 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:48.480 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:48.480 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:48.480 03:11:38 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:48.481 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:48.481 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:48.481 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:48.481 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:30:48.481 00:30:48.481 --- 10.0.0.2 ping statistics --- 00:30:48.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:48.481 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:30:48.481 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:48.481 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:48.481 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:30:48.481 00:30:48.481 --- 10.0.0.1 ping statistics --- 00:30:48.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:48.481 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:30:48.481 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:48.481 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:30:48.481 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:48.481 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:48.481 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:48.481 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:48.481 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:48.481 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:48.481 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:48.481 03:11:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:30:48.481 03:11:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:48.481 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:48.481 03:11:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:48.481 03:11:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:48.481 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=481690 00:30:48.481 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:48.481 03:11:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 481690 00:30:48.481 03:11:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 481690 ']' 00:30:48.481 03:11:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:48.481 03:11:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:48.481 03:11:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:48.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
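The block above builds the loopback topology used for the phy TCP run: one NIC port (cvl_0_0, 10.0.0.2) is moved into a private namespace for the target, its sibling (cvl_0_1, 10.0.0.1) stays in the root namespace for the initiator, and nvmf_tgt is then started inside that namespace. A condensed sketch of the same steps, with names, addresses and flags taken from the log:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                   # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                          # target address reachable from the root ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1            # and the reverse direction
  modprobe nvme-tcp
  # start the SPDK target inside the namespace, as nvmfappstart does (-m 0xE: cores 1-3)
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &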
00:30:48.481 03:11:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:48.481 03:11:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:48.481 [2024-05-13 03:11:38.986833] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:30:48.481 [2024-05-13 03:11:38.986926] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:48.481 EAL: No free 2048 kB hugepages reported on node 1 00:30:48.481 [2024-05-13 03:11:39.026619] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:48.481 [2024-05-13 03:11:39.054514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:48.481 [2024-05-13 03:11:39.141684] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:48.481 [2024-05-13 03:11:39.141745] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:48.481 [2024-05-13 03:11:39.141775] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:48.481 [2024-05-13 03:11:39.141787] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:48.481 [2024-05-13 03:11:39.141798] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:48.481 [2024-05-13 03:11:39.142136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:48.481 [2024-05-13 03:11:39.142197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:48.481 [2024-05-13 03:11:39.142200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:48.481 03:11:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:48.481 03:11:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:30:48.481 03:11:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:48.481 03:11:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:48.481 03:11:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:48.481 03:11:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:48.481 03:11:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:48.481 03:11:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.481 03:11:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:48.481 [2024-05-13 03:11:39.273779] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:48.481 03:11:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.481 03:11:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:48.481 03:11:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.481 03:11:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:48.739 Malloc0 00:30:48.739 03:11:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.739 03:11:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:48.739 03:11:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.739 03:11:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:48.739 03:11:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.739 03:11:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:48.739 03:11:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.739 03:11:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:48.739 03:11:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.739 03:11:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:48.739 03:11:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.739 03:11:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:48.739 [2024-05-13 03:11:39.333288] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:30:48.739 [2024-05-13 03:11:39.333566] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:48.739 03:11:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.739 03:11:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:30:48.739 03:11:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:30:48.739 03:11:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:30:48.739 03:11:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:30:48.739 03:11:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:48.739 03:11:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:48.739 { 00:30:48.739 "params": { 00:30:48.739 "name": "Nvme$subsystem", 00:30:48.739 "trtype": "$TEST_TRANSPORT", 00:30:48.739 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:48.739 "adrfam": "ipv4", 00:30:48.739 "trsvcid": "$NVMF_PORT", 00:30:48.739 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:48.739 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:48.739 "hdgst": ${hdgst:-false}, 00:30:48.739 "ddgst": ${ddgst:-false} 00:30:48.739 }, 00:30:48.739 "method": "bdev_nvme_attach_controller" 00:30:48.739 } 00:30:48.739 EOF 00:30:48.739 )") 00:30:48.739 03:11:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:30:48.739 03:11:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 
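The target is provisioned above through rpc_cmd against the target's JSON-RPC socket (/var/tmp/spdk.sock in this run): a TCP transport, a 64 MiB Malloc bdev with 512-byte blocks, a subsystem, a namespace and a listener on 10.0.0.2:4420. A sketch of the equivalent calls via scripts/rpc.py, with every argument copied from the log; the only assumption is that rpc_cmd forwards its arguments unchanged to rpc.py on that socket:

  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC nvmf_create_transport -t tcp -o -u 8192                               # transport flags as in the log
  $RPC bdev_malloc_create 64 512 -b Malloc0                                  # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # bdevperf then attaches to that listener, reading the generated config shown below from /dev/fd/62:
  #   build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1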
00:30:48.739 03:11:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:30:48.739 03:11:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:48.739 "params": { 00:30:48.739 "name": "Nvme1", 00:30:48.739 "trtype": "tcp", 00:30:48.739 "traddr": "10.0.0.2", 00:30:48.739 "adrfam": "ipv4", 00:30:48.739 "trsvcid": "4420", 00:30:48.739 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:48.739 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:48.739 "hdgst": false, 00:30:48.739 "ddgst": false 00:30:48.739 }, 00:30:48.739 "method": "bdev_nvme_attach_controller" 00:30:48.739 }' 00:30:48.740 [2024-05-13 03:11:39.378281] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:30:48.740 [2024-05-13 03:11:39.378358] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid481837 ] 00:30:48.740 EAL: No free 2048 kB hugepages reported on node 1 00:30:48.740 [2024-05-13 03:11:39.410307] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:48.740 [2024-05-13 03:11:39.438391] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:48.740 [2024-05-13 03:11:39.525420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:48.997 Running I/O for 1 seconds... 00:30:50.371 00:30:50.371 Latency(us) 00:30:50.371 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:50.371 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:50.371 Verification LBA range: start 0x0 length 0x4000 00:30:50.371 Nvme1n1 : 1.01 9046.05 35.34 0.00 0.00 14078.66 1674.81 15825.73 00:30:50.371 =================================================================================================================== 00:30:50.371 Total : 9046.05 35.34 0.00 0.00 14078.66 1674.81 15825.73 00:30:50.371 03:11:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=481975 00:30:50.371 03:11:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:30:50.371 03:11:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:30:50.371 03:11:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:30:50.371 03:11:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:30:50.371 03:11:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:30:50.371 03:11:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:50.371 03:11:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:50.371 { 00:30:50.371 "params": { 00:30:50.371 "name": "Nvme$subsystem", 00:30:50.371 "trtype": "$TEST_TRANSPORT", 00:30:50.371 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:50.371 "adrfam": "ipv4", 00:30:50.371 "trsvcid": "$NVMF_PORT", 00:30:50.371 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:50.371 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:50.371 "hdgst": ${hdgst:-false}, 00:30:50.371 "ddgst": ${ddgst:-false} 00:30:50.371 }, 00:30:50.371 "method": "bdev_nvme_attach_controller" 00:30:50.371 } 00:30:50.371 EOF 00:30:50.371 )") 00:30:50.371 03:11:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:30:50.371 03:11:41 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:30:50.371 03:11:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:30:50.371 03:11:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:50.371 "params": { 00:30:50.371 "name": "Nvme1", 00:30:50.371 "trtype": "tcp", 00:30:50.371 "traddr": "10.0.0.2", 00:30:50.371 "adrfam": "ipv4", 00:30:50.371 "trsvcid": "4420", 00:30:50.371 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:50.371 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:50.372 "hdgst": false, 00:30:50.372 "ddgst": false 00:30:50.372 }, 00:30:50.372 "method": "bdev_nvme_attach_controller" 00:30:50.372 }' 00:30:50.372 [2024-05-13 03:11:41.064658] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:30:50.372 [2024-05-13 03:11:41.064768] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid481975 ] 00:30:50.372 EAL: No free 2048 kB hugepages reported on node 1 00:30:50.372 [2024-05-13 03:11:41.097831] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:50.372 [2024-05-13 03:11:41.127471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:50.630 [2024-05-13 03:11:41.226793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:50.630 Running I/O for 15 seconds... 00:30:53.917 03:11:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 481690 00:30:53.917 03:11:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:30:53.917 [2024-05-13 03:11:44.034576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:52312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.917 [2024-05-13 03:11:44.034627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.917 [2024-05-13 03:11:44.034660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:52320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.917 [2024-05-13 03:11:44.034679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.917 [2024-05-13 03:11:44.034708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:52328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.917 [2024-05-13 03:11:44.034729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.917 [2024-05-13 03:11:44.034763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:52336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.917 [2024-05-13 03:11:44.034779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.917 [2024-05-13 03:11:44.034802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:52344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.917 [2024-05-13 03:11:44.034818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.917 [2024-05-13 03:11:44.034835] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:52352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.917 [2024-05-13 03:11:44.034849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.917 [2024-05-13 03:11:44.034866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:52360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.917 [2024-05-13 03:11:44.034880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.917 [2024-05-13 03:11:44.034897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:52368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.917 [2024-05-13 03:11:44.034912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.917 [2024-05-13 03:11:44.034928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:52376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.917 [2024-05-13 03:11:44.034942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.917 [2024-05-13 03:11:44.034967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:52384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.917 [2024-05-13 03:11:44.035004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.917 [2024-05-13 03:11:44.035029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:52392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.917 [2024-05-13 03:11:44.035050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.917 [2024-05-13 03:11:44.035074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:52400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.917 [2024-05-13 03:11:44.035095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.917 [2024-05-13 03:11:44.035118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:52408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.917 [2024-05-13 03:11:44.035140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.917 [2024-05-13 03:11:44.035163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:52416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.917 [2024-05-13 03:11:44.035186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.917 [2024-05-13 03:11:44.035213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:52424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.917 [2024-05-13 03:11:44.035235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.917 [2024-05-13 03:11:44.035262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:107 nsid:1 lba:52432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.917 [2024-05-13 03:11:44.035285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.917 [2024-05-13 03:11:44.035310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:52440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.917 [2024-05-13 03:11:44.035338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.917 [2024-05-13 03:11:44.035365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:52448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.917 [2024-05-13 03:11:44.035387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.917 [2024-05-13 03:11:44.035414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:52456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.917 [2024-05-13 03:11:44.035436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.917 [2024-05-13 03:11:44.035462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:52464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.917 [2024-05-13 03:11:44.035485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.917 [2024-05-13 03:11:44.035510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:52472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.917 [2024-05-13 03:11:44.035533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.917 [2024-05-13 03:11:44.035560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:52480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.917 [2024-05-13 03:11:44.035583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.917 [2024-05-13 03:11:44.035608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:52488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.917 [2024-05-13 03:11:44.035630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.917 [2024-05-13 03:11:44.035656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:52496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.917 [2024-05-13 03:11:44.035678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.917 [2024-05-13 03:11:44.035714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:52504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.917 [2024-05-13 03:11:44.035754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.917 [2024-05-13 03:11:44.035778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:52512 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.917 [2024-05-13 03:11:44.035800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.917 [2024-05-13 03:11:44.035823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:52520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.917 [2024-05-13 03:11:44.035843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.917 [2024-05-13 03:11:44.035868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:52528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.917 [2024-05-13 03:11:44.035885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.917 [2024-05-13 03:11:44.035900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:52536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.917 [2024-05-13 03:11:44.035915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.917 [2024-05-13 03:11:44.035938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:52544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.917 [2024-05-13 03:11:44.035953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.917 [2024-05-13 03:11:44.035968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:52552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.917 [2024-05-13 03:11:44.035998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.917 [2024-05-13 03:11:44.036017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:53320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.917 [2024-05-13 03:11:44.036033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.917 [2024-05-13 03:11:44.036050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:52560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.917 [2024-05-13 03:11:44.036065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.917 [2024-05-13 03:11:44.036081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:52568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.917 [2024-05-13 03:11:44.036097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.917 [2024-05-13 03:11:44.036114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:52576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.917 [2024-05-13 03:11:44.036129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.918 [2024-05-13 03:11:44.036146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:52584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:30:53.918 [2024-05-13 03:11:44.036161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.918 [2024-05-13 03:11:44.036177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:52592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.918 [2024-05-13 03:11:44.036192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.918 [2024-05-13 03:11:44.036209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:52600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.918 [2024-05-13 03:11:44.036224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.918 [2024-05-13 03:11:44.036242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:52608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.918 [2024-05-13 03:11:44.036257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.918 [2024-05-13 03:11:44.036274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:52616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.918 [2024-05-13 03:11:44.036289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.918 [2024-05-13 03:11:44.036306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:52624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.918 [2024-05-13 03:11:44.036322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.918 [2024-05-13 03:11:44.036339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:52632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.918 [2024-05-13 03:11:44.036354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.918 [2024-05-13 03:11:44.036375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:52640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.918 [2024-05-13 03:11:44.036390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.918 [2024-05-13 03:11:44.036407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:52648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.918 [2024-05-13 03:11:44.036422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.918 [2024-05-13 03:11:44.036440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:52656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.918 [2024-05-13 03:11:44.036455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.918 [2024-05-13 03:11:44.036472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:52664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.918 [2024-05-13 
03:11:44.036487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.918 [2024-05-13 03:11:44.036504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:52672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.918 [2024-05-13 03:11:44.036520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.918 [2024-05-13 03:11:44.036536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:52680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.918 [2024-05-13 03:11:44.036551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.918 [2024-05-13 03:11:44.036568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:52688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.918 [2024-05-13 03:11:44.036583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.918 [2024-05-13 03:11:44.036601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:52696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.918 [2024-05-13 03:11:44.036616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.918 [2024-05-13 03:11:44.036633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:52704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.918 [2024-05-13 03:11:44.036648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.918 [2024-05-13 03:11:44.036665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:52712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.918 [2024-05-13 03:11:44.036680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.918 [2024-05-13 03:11:44.036708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:52720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.918 [2024-05-13 03:11:44.036728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.918 [2024-05-13 03:11:44.036760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:52728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.918 [2024-05-13 03:11:44.036774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.918 [2024-05-13 03:11:44.036789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:52736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.918 [2024-05-13 03:11:44.036807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.918 [2024-05-13 03:11:44.036822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:53328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.918 [2024-05-13 03:11:44.036836] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.918 [2024-05-13 03:11:44.036851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:52744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.918 [2024-05-13 03:11:44.036865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.918 [2024-05-13 03:11:44.036880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:52752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.918 [2024-05-13 03:11:44.036893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.918 [2024-05-13 03:11:44.036909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:52760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.918 [2024-05-13 03:11:44.036923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.918 [2024-05-13 03:11:44.036938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:52768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.918 [2024-05-13 03:11:44.036951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.918 [2024-05-13 03:11:44.036968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:52776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.918 [2024-05-13 03:11:44.036998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.918 [2024-05-13 03:11:44.037016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:52784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.918 [2024-05-13 03:11:44.037031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.918 [2024-05-13 03:11:44.037048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:52792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.918 [2024-05-13 03:11:44.037064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.918 [2024-05-13 03:11:44.037082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:52800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.918 [2024-05-13 03:11:44.037097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.918 [2024-05-13 03:11:44.037114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:52808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.918 [2024-05-13 03:11:44.037129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.918 [2024-05-13 03:11:44.037145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:52816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.918 [2024-05-13 03:11:44.037161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.918 [2024-05-13 03:11:44.037178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:52824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.918 [2024-05-13 03:11:44.037193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.918 [2024-05-13 03:11:44.037214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:52832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.918 [2024-05-13 03:11:44.037230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.918 [2024-05-13 03:11:44.037247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:52840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.918 [2024-05-13 03:11:44.037262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.918 [2024-05-13 03:11:44.037279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:52848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.918 [2024-05-13 03:11:44.037294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.918 [2024-05-13 03:11:44.037311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:52856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.918 [2024-05-13 03:11:44.037327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.918 [2024-05-13 03:11:44.037343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:52864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.918 [2024-05-13 03:11:44.037358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.918 [2024-05-13 03:11:44.037376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:52872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.918 [2024-05-13 03:11:44.037392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.918 [2024-05-13 03:11:44.037409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:52880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.918 [2024-05-13 03:11:44.037424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.918 [2024-05-13 03:11:44.037440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:52888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.918 [2024-05-13 03:11:44.037455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.918 [2024-05-13 03:11:44.037472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:52896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.918 [2024-05-13 03:11:44.037487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.919 [2024-05-13 03:11:44.037506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:52904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.919 [2024-05-13 03:11:44.037521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.919 [2024-05-13 03:11:44.037538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:52912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.919 [2024-05-13 03:11:44.037553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.919 [2024-05-13 03:11:44.037570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.919 [2024-05-13 03:11:44.037586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.919 [2024-05-13 03:11:44.037603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:52928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.919 [2024-05-13 03:11:44.037621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.919 [2024-05-13 03:11:44.037639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:52936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.919 [2024-05-13 03:11:44.037655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.919 [2024-05-13 03:11:44.037672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:52944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.919 [2024-05-13 03:11:44.037687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.919 [2024-05-13 03:11:44.037719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:52952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.919 [2024-05-13 03:11:44.037752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.919 [2024-05-13 03:11:44.037768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:52960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.919 [2024-05-13 03:11:44.037782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.919 [2024-05-13 03:11:44.037798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:52968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.919 [2024-05-13 03:11:44.037811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.919 [2024-05-13 03:11:44.037826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:52976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.919 [2024-05-13 03:11:44.037840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.919 
[2024-05-13 03:11:44.037855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:52984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.919 [2024-05-13 03:11:44.037869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.919 [2024-05-13 03:11:44.037884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:52992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.919 [2024-05-13 03:11:44.037898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.919 [2024-05-13 03:11:44.037913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:53000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.919 [2024-05-13 03:11:44.037926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.919 [2024-05-13 03:11:44.037942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:53008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.919 [2024-05-13 03:11:44.037955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.919 [2024-05-13 03:11:44.037970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:53016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.919 [2024-05-13 03:11:44.038000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.919 [2024-05-13 03:11:44.038016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:53024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.919 [2024-05-13 03:11:44.038029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.919 [2024-05-13 03:11:44.038064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:53032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.919 [2024-05-13 03:11:44.038081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.919 [2024-05-13 03:11:44.038098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:53040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.919 [2024-05-13 03:11:44.038113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.919 [2024-05-13 03:11:44.038130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:53048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.919 [2024-05-13 03:11:44.038145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.919 [2024-05-13 03:11:44.038162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:53056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.919 [2024-05-13 03:11:44.038177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.919 [2024-05-13 03:11:44.038195] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:53064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.919 [2024-05-13 03:11:44.038210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.919 [2024-05-13 03:11:44.038226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:53072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.919 [2024-05-13 03:11:44.038242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.919 [2024-05-13 03:11:44.038258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:53080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.919 [2024-05-13 03:11:44.038273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.919 [2024-05-13 03:11:44.038290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:53088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.919 [2024-05-13 03:11:44.038305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.919 [2024-05-13 03:11:44.038322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:53096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.919 [2024-05-13 03:11:44.038337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.919 [2024-05-13 03:11:44.038354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:53104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.919 [2024-05-13 03:11:44.038369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.919 [2024-05-13 03:11:44.038386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:53112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.919 [2024-05-13 03:11:44.038401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.919 [2024-05-13 03:11:44.038418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:53120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.919 [2024-05-13 03:11:44.038433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.919 [2024-05-13 03:11:44.038450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:53128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.919 [2024-05-13 03:11:44.038465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.919 [2024-05-13 03:11:44.038486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:53136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.919 [2024-05-13 03:11:44.038501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.919 [2024-05-13 03:11:44.038518] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:53144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.919 [2024-05-13 03:11:44.038534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.919 [2024-05-13 03:11:44.038551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:53152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.919 [2024-05-13 03:11:44.038566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.919 [2024-05-13 03:11:44.038583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:53160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.919 [2024-05-13 03:11:44.038599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.919 [2024-05-13 03:11:44.038615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:53168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.919 [2024-05-13 03:11:44.038632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.919 [2024-05-13 03:11:44.038649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:53176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.919 [2024-05-13 03:11:44.038664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.919 [2024-05-13 03:11:44.038681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:53184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.919 [2024-05-13 03:11:44.038704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.919 [2024-05-13 03:11:44.038726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:53192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.919 [2024-05-13 03:11:44.038756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.919 [2024-05-13 03:11:44.038772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:53200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.919 [2024-05-13 03:11:44.038786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.919 [2024-05-13 03:11:44.038801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:53208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.919 [2024-05-13 03:11:44.038815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.919 [2024-05-13 03:11:44.038830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:53216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.919 [2024-05-13 03:11:44.038844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.919 [2024-05-13 03:11:44.038859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:78 nsid:1 lba:53224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.919 [2024-05-13 03:11:44.038873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.919 [2024-05-13 03:11:44.038888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:53232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.920 [2024-05-13 03:11:44.038908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.920 [2024-05-13 03:11:44.038924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:53240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.920 [2024-05-13 03:11:44.038939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.920 [2024-05-13 03:11:44.038955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:53248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.920 [2024-05-13 03:11:44.038969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.920 [2024-05-13 03:11:44.038999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:53256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.920 [2024-05-13 03:11:44.039012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.920 [2024-05-13 03:11:44.039027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:53264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.920 [2024-05-13 03:11:44.039039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.920 [2024-05-13 03:11:44.039072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:53272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.920 [2024-05-13 03:11:44.039088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.920 [2024-05-13 03:11:44.039105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:53280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.920 [2024-05-13 03:11:44.039120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.920 [2024-05-13 03:11:44.039138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:53288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.920 [2024-05-13 03:11:44.039154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.920 [2024-05-13 03:11:44.039171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:53296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.920 [2024-05-13 03:11:44.039187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.920 [2024-05-13 03:11:44.039204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:53304 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.920 [2024-05-13 03:11:44.039220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.920 [2024-05-13 03:11:44.039237] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe64060 is same with the state(5) to be set 00:30:53.920 [2024-05-13 03:11:44.039256] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:53.920 [2024-05-13 03:11:44.039270] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:53.920 [2024-05-13 03:11:44.039283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53312 len:8 PRP1 0x0 PRP2 0x0 00:30:53.920 [2024-05-13 03:11:44.039298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.920 [2024-05-13 03:11:44.039366] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe64060 was disconnected and freed. reset controller. 00:30:53.920 [2024-05-13 03:11:44.043298] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.920 [2024-05-13 03:11:44.043380] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:53.920 [2024-05-13 03:11:44.044146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.920 [2024-05-13 03:11:44.044402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.920 [2024-05-13 03:11:44.044433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:53.920 [2024-05-13 03:11:44.044451] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:53.920 [2024-05-13 03:11:44.044706] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:53.920 [2024-05-13 03:11:44.044960] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.920 [2024-05-13 03:11:44.044998] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.920 [2024-05-13 03:11:44.045019] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.920 [2024-05-13 03:11:44.048646] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
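The long run of completions above all carry the same status pair "(00/08)". In an NVMe completion this is (Status Code Type / Status Code): type 0x0 is the generic command status set and code 0x08 is "Command Aborted due to SQ Deletion", which matches the "ABORTED - SQ DELETION" text the log prints — the queued READs were aborted because their submission queue was torn down when the controller reset began. A minimal, illustrative decode of that one mapping (not SPDK code, only the value the log itself names):

/* Illustrative only: decode the "(00/08)" pair printed by
 * spdk_nvme_print_completion above. First value = Status Code Type,
 * second = Status Code; 0x0/0x08 is the generic-status
 * "Command Aborted due to SQ Deletion". */
#include <stdio.h>

static const char *status_string(unsigned sct, unsigned sc)
{
    if (sct == 0x0 && sc == 0x08) {
        return "ABORTED - SQ DELETION";
    }
    return "other status (see the NVMe base specification)";
}

int main(void)
{
    printf("(00/08) -> %s\n", status_string(0x0, 0x08));
    return 0;
}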
00:30:53.920 [2024-05-13 03:11:44.057603] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.920 [2024-05-13 03:11:44.058093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.920 [2024-05-13 03:11:44.058359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.920 [2024-05-13 03:11:44.058388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:53.920 [2024-05-13 03:11:44.058407] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:53.920 [2024-05-13 03:11:44.058649] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:53.920 [2024-05-13 03:11:44.058905] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.920 [2024-05-13 03:11:44.058930] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.920 [2024-05-13 03:11:44.058946] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.920 [2024-05-13 03:11:44.062558] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:53.920 [2024-05-13 03:11:44.071523] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.920 [2024-05-13 03:11:44.072025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.920 [2024-05-13 03:11:44.072270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.920 [2024-05-13 03:11:44.072297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:53.920 [2024-05-13 03:11:44.072315] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:53.920 [2024-05-13 03:11:44.072557] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:53.920 [2024-05-13 03:11:44.072814] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.920 [2024-05-13 03:11:44.072839] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.920 [2024-05-13 03:11:44.072855] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.920 [2024-05-13 03:11:44.076475] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
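Each reset attempt above and below follows the same shape: nvme_ctrlr_disconnect starts a reset, posix_sock_create reports connect() failed with errno = 111, the qpair flush then fails with (9) "Bad file descriptor" because the socket is already gone, and spdk_nvme_ctrlr_reconnect_poll_async reports "controller reinitialization failed". On Linux, errno 111 is ECONNREFUSED, i.e. nothing was accepting connections on 10.0.0.2:4420 at that instant. A standalone probe along these lines (a sketch, not part of the test; address and port taken from the log above) would report the same condition:

/* Minimal sketch: probe the NVMe/TCP listener the log is trying to reach.
 * 10.0.0.2 and port 4420 are taken from the log entries above; while the
 * target is down, connect() is expected to fail with errno 111
 * (ECONNREFUSED), exactly as posix_sock_create reports. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                 /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* Expected while the target is resetting: errno 111 (ECONNREFUSED). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    } else {
        printf("connect() succeeded, target is accepting connections\n");
    }

    close(fd);
    return 0;
}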
00:30:53.920 [2024-05-13 03:11:44.085436] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.920 [2024-05-13 03:11:44.085935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.920 [2024-05-13 03:11:44.086206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.920 [2024-05-13 03:11:44.086235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:53.920 [2024-05-13 03:11:44.086252] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:53.920 [2024-05-13 03:11:44.086493] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:53.920 [2024-05-13 03:11:44.086748] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.920 [2024-05-13 03:11:44.086772] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.920 [2024-05-13 03:11:44.086788] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.920 [2024-05-13 03:11:44.090402] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:53.920 [2024-05-13 03:11:44.099373] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.920 [2024-05-13 03:11:44.099896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.920 [2024-05-13 03:11:44.100137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.920 [2024-05-13 03:11:44.100162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:53.920 [2024-05-13 03:11:44.100178] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:53.920 [2024-05-13 03:11:44.100435] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:53.920 [2024-05-13 03:11:44.100680] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.920 [2024-05-13 03:11:44.100712] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.920 [2024-05-13 03:11:44.100729] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.920 [2024-05-13 03:11:44.104339] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:53.920 [2024-05-13 03:11:44.113374] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.920 [2024-05-13 03:11:44.113890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.920 [2024-05-13 03:11:44.114138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.920 [2024-05-13 03:11:44.114164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:53.920 [2024-05-13 03:11:44.114179] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:53.920 [2024-05-13 03:11:44.114435] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:53.920 [2024-05-13 03:11:44.114680] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.920 [2024-05-13 03:11:44.114711] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.920 [2024-05-13 03:11:44.114727] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.920 [2024-05-13 03:11:44.118341] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:53.920 [2024-05-13 03:11:44.127328] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.920 [2024-05-13 03:11:44.127830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.920 [2024-05-13 03:11:44.128058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.920 [2024-05-13 03:11:44.128086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:53.920 [2024-05-13 03:11:44.128103] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:53.920 [2024-05-13 03:11:44.128344] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:53.920 [2024-05-13 03:11:44.128589] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.921 [2024-05-13 03:11:44.128613] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.921 [2024-05-13 03:11:44.128628] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.921 [2024-05-13 03:11:44.132265] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:53.921 [2024-05-13 03:11:44.141233] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.921 [2024-05-13 03:11:44.141733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.921 [2024-05-13 03:11:44.141999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.921 [2024-05-13 03:11:44.142027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:53.921 [2024-05-13 03:11:44.142045] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:53.921 [2024-05-13 03:11:44.142287] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:53.921 [2024-05-13 03:11:44.142533] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.921 [2024-05-13 03:11:44.142557] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.921 [2024-05-13 03:11:44.142572] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.921 [2024-05-13 03:11:44.146194] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:53.921 [2024-05-13 03:11:44.155165] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.921 [2024-05-13 03:11:44.155666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.921 [2024-05-13 03:11:44.155917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.921 [2024-05-13 03:11:44.155946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:53.921 [2024-05-13 03:11:44.155964] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:53.921 [2024-05-13 03:11:44.156205] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:53.921 [2024-05-13 03:11:44.156450] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.921 [2024-05-13 03:11:44.156474] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.921 [2024-05-13 03:11:44.156490] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.921 [2024-05-13 03:11:44.160118] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:53.921 [2024-05-13 03:11:44.169080] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.921 [2024-05-13 03:11:44.169592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.921 [2024-05-13 03:11:44.169884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.921 [2024-05-13 03:11:44.169911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:53.921 [2024-05-13 03:11:44.169931] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:53.921 [2024-05-13 03:11:44.170192] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:53.921 [2024-05-13 03:11:44.170438] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.921 [2024-05-13 03:11:44.170461] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.921 [2024-05-13 03:11:44.170476] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.921 [2024-05-13 03:11:44.174102] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:53.921 [2024-05-13 03:11:44.183065] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.921 [2024-05-13 03:11:44.183550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.921 [2024-05-13 03:11:44.183800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.921 [2024-05-13 03:11:44.183830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:53.921 [2024-05-13 03:11:44.183848] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:53.921 [2024-05-13 03:11:44.184089] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:53.921 [2024-05-13 03:11:44.184334] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.921 [2024-05-13 03:11:44.184358] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.921 [2024-05-13 03:11:44.184374] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.921 [2024-05-13 03:11:44.188012] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:53.921 [2024-05-13 03:11:44.197007] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.921 [2024-05-13 03:11:44.197489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.921 [2024-05-13 03:11:44.197806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.921 [2024-05-13 03:11:44.197837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:53.921 [2024-05-13 03:11:44.197855] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:53.921 [2024-05-13 03:11:44.198097] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:53.921 [2024-05-13 03:11:44.198343] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.921 [2024-05-13 03:11:44.198367] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.921 [2024-05-13 03:11:44.198382] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.921 [2024-05-13 03:11:44.202005] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:53.921 [2024-05-13 03:11:44.210975] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.921 [2024-05-13 03:11:44.211465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.921 [2024-05-13 03:11:44.211748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.921 [2024-05-13 03:11:44.211778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:53.921 [2024-05-13 03:11:44.211795] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:53.921 [2024-05-13 03:11:44.212043] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:53.921 [2024-05-13 03:11:44.212289] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.921 [2024-05-13 03:11:44.212313] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.921 [2024-05-13 03:11:44.212328] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.921 [2024-05-13 03:11:44.215949] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:53.921 [2024-05-13 03:11:44.224912] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.921 [2024-05-13 03:11:44.225402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.921 [2024-05-13 03:11:44.225650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.921 [2024-05-13 03:11:44.225680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:53.921 [2024-05-13 03:11:44.225708] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:53.921 [2024-05-13 03:11:44.225955] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:53.921 [2024-05-13 03:11:44.226201] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.921 [2024-05-13 03:11:44.226224] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.921 [2024-05-13 03:11:44.226239] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.921 [2024-05-13 03:11:44.229861] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:53.921 [2024-05-13 03:11:44.238826] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.921 [2024-05-13 03:11:44.239365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.921 [2024-05-13 03:11:44.239659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.921 [2024-05-13 03:11:44.239687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:53.921 [2024-05-13 03:11:44.239717] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:53.922 [2024-05-13 03:11:44.239959] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:53.922 [2024-05-13 03:11:44.240204] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.922 [2024-05-13 03:11:44.240228] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.922 [2024-05-13 03:11:44.240243] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.922 [2024-05-13 03:11:44.243863] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:53.922 [2024-05-13 03:11:44.252817] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.922 [2024-05-13 03:11:44.253305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.922 [2024-05-13 03:11:44.253564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.922 [2024-05-13 03:11:44.253592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:53.922 [2024-05-13 03:11:44.253610] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:53.922 [2024-05-13 03:11:44.253861] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:53.922 [2024-05-13 03:11:44.254116] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.922 [2024-05-13 03:11:44.254140] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.922 [2024-05-13 03:11:44.254155] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.922 [2024-05-13 03:11:44.257775] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:53.922 [2024-05-13 03:11:44.266733] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.922 [2024-05-13 03:11:44.267219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.922 [2024-05-13 03:11:44.267479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.922 [2024-05-13 03:11:44.267504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:53.922 [2024-05-13 03:11:44.267519] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:53.922 [2024-05-13 03:11:44.267792] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:53.922 [2024-05-13 03:11:44.268039] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.922 [2024-05-13 03:11:44.268063] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.922 [2024-05-13 03:11:44.268078] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.922 [2024-05-13 03:11:44.271691] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:53.922 [2024-05-13 03:11:44.280656] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.922 [2024-05-13 03:11:44.281169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.922 [2024-05-13 03:11:44.281417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.922 [2024-05-13 03:11:44.281459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:53.922 [2024-05-13 03:11:44.281477] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:53.922 [2024-05-13 03:11:44.281731] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:53.922 [2024-05-13 03:11:44.281977] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.922 [2024-05-13 03:11:44.282001] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.922 [2024-05-13 03:11:44.282016] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.922 [2024-05-13 03:11:44.285625] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:53.922 [2024-05-13 03:11:44.294598] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.922 [2024-05-13 03:11:44.295111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.922 [2024-05-13 03:11:44.295327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.922 [2024-05-13 03:11:44.295355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:53.922 [2024-05-13 03:11:44.295373] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:53.922 [2024-05-13 03:11:44.295614] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:53.922 [2024-05-13 03:11:44.295869] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.922 [2024-05-13 03:11:44.295899] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.922 [2024-05-13 03:11:44.295914] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.922 [2024-05-13 03:11:44.299533] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:53.922 [2024-05-13 03:11:44.308508] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.922 [2024-05-13 03:11:44.309060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.922 [2024-05-13 03:11:44.309354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.922 [2024-05-13 03:11:44.309382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:53.922 [2024-05-13 03:11:44.309399] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:53.922 [2024-05-13 03:11:44.309640] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:53.922 [2024-05-13 03:11:44.309895] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.922 [2024-05-13 03:11:44.309919] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.922 [2024-05-13 03:11:44.309934] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.922 [2024-05-13 03:11:44.313551] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:53.922 [2024-05-13 03:11:44.322561] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.922 [2024-05-13 03:11:44.323087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.922 [2024-05-13 03:11:44.323357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.922 [2024-05-13 03:11:44.323385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:53.922 [2024-05-13 03:11:44.323403] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:53.922 [2024-05-13 03:11:44.323643] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:53.922 [2024-05-13 03:11:44.323901] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.922 [2024-05-13 03:11:44.323926] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.922 [2024-05-13 03:11:44.323942] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.922 [2024-05-13 03:11:44.327554] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:53.922 [2024-05-13 03:11:44.336508] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.922 [2024-05-13 03:11:44.337059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.922 [2024-05-13 03:11:44.337352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.922 [2024-05-13 03:11:44.337381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:53.922 [2024-05-13 03:11:44.337398] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:53.922 [2024-05-13 03:11:44.337639] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:53.922 [2024-05-13 03:11:44.337894] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.922 [2024-05-13 03:11:44.337919] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.922 [2024-05-13 03:11:44.337940] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.922 [2024-05-13 03:11:44.341547] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:53.922 [2024-05-13 03:11:44.350513] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.922 [2024-05-13 03:11:44.350984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.922 [2024-05-13 03:11:44.351247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.922 [2024-05-13 03:11:44.351275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:53.922 [2024-05-13 03:11:44.351292] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:53.922 [2024-05-13 03:11:44.351533] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:53.922 [2024-05-13 03:11:44.351793] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.922 [2024-05-13 03:11:44.351817] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.922 [2024-05-13 03:11:44.351833] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.922 [2024-05-13 03:11:44.355444] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:53.922 [2024-05-13 03:11:44.364405] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.922 [2024-05-13 03:11:44.364926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.922 [2024-05-13 03:11:44.365193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.922 [2024-05-13 03:11:44.365221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:53.922 [2024-05-13 03:11:44.365239] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:53.922 [2024-05-13 03:11:44.365479] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:53.922 [2024-05-13 03:11:44.365736] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.922 [2024-05-13 03:11:44.365760] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.922 [2024-05-13 03:11:44.365775] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.922 [2024-05-13 03:11:44.369389] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:53.922 [2024-05-13 03:11:44.378355] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.922 [2024-05-13 03:11:44.378853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.922 [2024-05-13 03:11:44.379096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.923 [2024-05-13 03:11:44.379120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:53.923 [2024-05-13 03:11:44.379135] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:53.923 [2024-05-13 03:11:44.379392] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:53.923 [2024-05-13 03:11:44.379637] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.923 [2024-05-13 03:11:44.379661] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.923 [2024-05-13 03:11:44.379676] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.923 [2024-05-13 03:11:44.383301] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:53.923 [2024-05-13 03:11:44.392262] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.923 [2024-05-13 03:11:44.392788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.923 [2024-05-13 03:11:44.393009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.923 [2024-05-13 03:11:44.393040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:53.923 [2024-05-13 03:11:44.393057] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:53.923 [2024-05-13 03:11:44.393299] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:53.923 [2024-05-13 03:11:44.393544] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.923 [2024-05-13 03:11:44.393568] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.923 [2024-05-13 03:11:44.393583] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.923 [2024-05-13 03:11:44.397209] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:53.923 [2024-05-13 03:11:44.406171] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.923 [2024-05-13 03:11:44.406678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.923 [2024-05-13 03:11:44.406927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.923 [2024-05-13 03:11:44.406953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:53.923 [2024-05-13 03:11:44.406969] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:53.923 [2024-05-13 03:11:44.407228] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:53.923 [2024-05-13 03:11:44.407473] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.923 [2024-05-13 03:11:44.407497] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.923 [2024-05-13 03:11:44.407512] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.923 [2024-05-13 03:11:44.411137] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:53.923 [2024-05-13 03:11:44.420101] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.923 [2024-05-13 03:11:44.420578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.923 [2024-05-13 03:11:44.420830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.923 [2024-05-13 03:11:44.420859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:53.923 [2024-05-13 03:11:44.420876] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:53.923 [2024-05-13 03:11:44.421116] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:53.923 [2024-05-13 03:11:44.421362] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.923 [2024-05-13 03:11:44.421385] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.923 [2024-05-13 03:11:44.421400] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.923 [2024-05-13 03:11:44.425024] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:53.923 [2024-05-13 03:11:44.434000] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.923 [2024-05-13 03:11:44.434480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.923 [2024-05-13 03:11:44.434732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.923 [2024-05-13 03:11:44.434775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:53.923 [2024-05-13 03:11:44.434792] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:53.923 [2024-05-13 03:11:44.435060] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:53.923 [2024-05-13 03:11:44.435305] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.923 [2024-05-13 03:11:44.435329] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.923 [2024-05-13 03:11:44.435344] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.923 [2024-05-13 03:11:44.438973] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:53.923 [2024-05-13 03:11:44.447942] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.923 [2024-05-13 03:11:44.448441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.923 [2024-05-13 03:11:44.448684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.923 [2024-05-13 03:11:44.448723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:53.923 [2024-05-13 03:11:44.448741] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:53.923 [2024-05-13 03:11:44.448982] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:53.923 [2024-05-13 03:11:44.449227] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.923 [2024-05-13 03:11:44.449250] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.923 [2024-05-13 03:11:44.449265] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.923 [2024-05-13 03:11:44.452886] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:53.923 [2024-05-13 03:11:44.461868] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.923 [2024-05-13 03:11:44.462351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.923 [2024-05-13 03:11:44.462598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.923 [2024-05-13 03:11:44.462626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:53.923 [2024-05-13 03:11:44.462643] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:53.923 [2024-05-13 03:11:44.462896] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:53.923 [2024-05-13 03:11:44.463142] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.923 [2024-05-13 03:11:44.463166] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.923 [2024-05-13 03:11:44.463181] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.923 [2024-05-13 03:11:44.466803] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:53.923 [2024-05-13 03:11:44.475775] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.923 [2024-05-13 03:11:44.476281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.923 [2024-05-13 03:11:44.476552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.923 [2024-05-13 03:11:44.476577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:53.923 [2024-05-13 03:11:44.476593] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:53.923 [2024-05-13 03:11:44.476877] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:53.923 [2024-05-13 03:11:44.477123] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.923 [2024-05-13 03:11:44.477147] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.923 [2024-05-13 03:11:44.477162] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.923 [2024-05-13 03:11:44.480785] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:53.923 [2024-05-13 03:11:44.489756] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.923 [2024-05-13 03:11:44.490263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.923 [2024-05-13 03:11:44.490516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.923 [2024-05-13 03:11:44.490541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:53.923 [2024-05-13 03:11:44.490556] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:53.923 [2024-05-13 03:11:44.490829] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:53.923 [2024-05-13 03:11:44.491075] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.923 [2024-05-13 03:11:44.491099] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.923 [2024-05-13 03:11:44.491115] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.923 [2024-05-13 03:11:44.494737] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:53.923 [2024-05-13 03:11:44.503708] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.923 [2024-05-13 03:11:44.504189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.923 [2024-05-13 03:11:44.504458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.923 [2024-05-13 03:11:44.504485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:53.923 [2024-05-13 03:11:44.504502] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:53.923 [2024-05-13 03:11:44.504755] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:53.923 [2024-05-13 03:11:44.505002] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.923 [2024-05-13 03:11:44.505025] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.923 [2024-05-13 03:11:44.505041] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.923 [2024-05-13 03:11:44.508653] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:53.924 [2024-05-13 03:11:44.517652] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.924 [2024-05-13 03:11:44.518156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.924 [2024-05-13 03:11:44.518426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.924 [2024-05-13 03:11:44.518455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:53.924 [2024-05-13 03:11:44.518471] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:53.924 [2024-05-13 03:11:44.518753] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:53.924 [2024-05-13 03:11:44.519000] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.924 [2024-05-13 03:11:44.519024] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.924 [2024-05-13 03:11:44.519039] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.924 [2024-05-13 03:11:44.522655] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:53.924 [2024-05-13 03:11:44.531668] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.924 [2024-05-13 03:11:44.532195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.924 [2024-05-13 03:11:44.532444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.924 [2024-05-13 03:11:44.532472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:53.924 [2024-05-13 03:11:44.532490] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:53.924 [2024-05-13 03:11:44.532743] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:53.924 [2024-05-13 03:11:44.532989] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.924 [2024-05-13 03:11:44.533012] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.924 [2024-05-13 03:11:44.533028] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.924 [2024-05-13 03:11:44.536648] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:53.924 [2024-05-13 03:11:44.545632] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.924 [2024-05-13 03:11:44.546127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.924 [2024-05-13 03:11:44.546368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.924 [2024-05-13 03:11:44.546396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:53.924 [2024-05-13 03:11:44.546413] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:53.924 [2024-05-13 03:11:44.546654] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:53.924 [2024-05-13 03:11:44.546909] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.924 [2024-05-13 03:11:44.546934] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.924 [2024-05-13 03:11:44.546950] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.924 [2024-05-13 03:11:44.550571] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:53.924 [2024-05-13 03:11:44.559543] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.924 [2024-05-13 03:11:44.560044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.924 [2024-05-13 03:11:44.560260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.924 [2024-05-13 03:11:44.560288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:53.924 [2024-05-13 03:11:44.560311] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:53.924 [2024-05-13 03:11:44.560553] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:53.924 [2024-05-13 03:11:44.560814] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.924 [2024-05-13 03:11:44.560839] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.924 [2024-05-13 03:11:44.560854] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.924 [2024-05-13 03:11:44.564469] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:53.924 [2024-05-13 03:11:44.573443] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.924 [2024-05-13 03:11:44.573955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.924 [2024-05-13 03:11:44.574217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.924 [2024-05-13 03:11:44.574242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:53.924 [2024-05-13 03:11:44.574258] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:53.924 [2024-05-13 03:11:44.574520] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:53.924 [2024-05-13 03:11:44.574778] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.924 [2024-05-13 03:11:44.574803] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.924 [2024-05-13 03:11:44.574818] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.924 [2024-05-13 03:11:44.578436] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:53.924 [2024-05-13 03:11:44.587402] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.924 [2024-05-13 03:11:44.587891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.924 [2024-05-13 03:11:44.588162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.924 [2024-05-13 03:11:44.588190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:53.924 [2024-05-13 03:11:44.588208] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:53.924 [2024-05-13 03:11:44.588449] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:53.924 [2024-05-13 03:11:44.588705] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.924 [2024-05-13 03:11:44.588729] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.924 [2024-05-13 03:11:44.588745] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.924 [2024-05-13 03:11:44.592362] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:53.924 [2024-05-13 03:11:44.601328] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.924 [2024-05-13 03:11:44.601848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.924 [2024-05-13 03:11:44.602081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.924 [2024-05-13 03:11:44.602109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:53.924 [2024-05-13 03:11:44.602126] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:53.924 [2024-05-13 03:11:44.602373] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:53.924 [2024-05-13 03:11:44.602619] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.924 [2024-05-13 03:11:44.602643] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.924 [2024-05-13 03:11:44.602658] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.924 [2024-05-13 03:11:44.606283] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:53.924 [2024-05-13 03:11:44.615246] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.924 [2024-05-13 03:11:44.615758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.924 [2024-05-13 03:11:44.616021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.924 [2024-05-13 03:11:44.616049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:53.924 [2024-05-13 03:11:44.616066] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:53.924 [2024-05-13 03:11:44.616306] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:53.924 [2024-05-13 03:11:44.616552] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.924 [2024-05-13 03:11:44.616575] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.924 [2024-05-13 03:11:44.616591] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.924 [2024-05-13 03:11:44.620219] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:53.924 [2024-05-13 03:11:44.629185] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.924 [2024-05-13 03:11:44.629689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.924 [2024-05-13 03:11:44.629966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.924 [2024-05-13 03:11:44.629994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:53.924 [2024-05-13 03:11:44.630011] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:53.924 [2024-05-13 03:11:44.630251] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:53.924 [2024-05-13 03:11:44.630497] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.924 [2024-05-13 03:11:44.630520] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.924 [2024-05-13 03:11:44.630536] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.924 [2024-05-13 03:11:44.634161] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:53.924 [2024-05-13 03:11:44.643130] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.924 [2024-05-13 03:11:44.643632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.924 [2024-05-13 03:11:44.643903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.924 [2024-05-13 03:11:44.643929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:53.924 [2024-05-13 03:11:44.643945] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:53.924 [2024-05-13 03:11:44.644209] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:53.924 [2024-05-13 03:11:44.644460] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.925 [2024-05-13 03:11:44.644483] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.925 [2024-05-13 03:11:44.644499] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.925 [2024-05-13 03:11:44.648125] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:53.925 [2024-05-13 03:11:44.657095] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.925 [2024-05-13 03:11:44.657574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.925 [2024-05-13 03:11:44.657818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.925 [2024-05-13 03:11:44.657844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:53.925 [2024-05-13 03:11:44.657860] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:53.925 [2024-05-13 03:11:44.658126] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:53.925 [2024-05-13 03:11:44.658373] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.925 [2024-05-13 03:11:44.658397] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.925 [2024-05-13 03:11:44.658412] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.925 [2024-05-13 03:11:44.662038] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:53.925 [2024-05-13 03:11:44.671025] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.925 [2024-05-13 03:11:44.671507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.925 [2024-05-13 03:11:44.671758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.925 [2024-05-13 03:11:44.671787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:53.925 [2024-05-13 03:11:44.671805] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:53.925 [2024-05-13 03:11:44.672046] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:53.925 [2024-05-13 03:11:44.672291] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.925 [2024-05-13 03:11:44.672315] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.925 [2024-05-13 03:11:44.672330] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.925 [2024-05-13 03:11:44.675960] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:53.925 [2024-05-13 03:11:44.684931] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.925 [2024-05-13 03:11:44.685439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.925 [2024-05-13 03:11:44.685711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.925 [2024-05-13 03:11:44.685740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:53.925 [2024-05-13 03:11:44.685758] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:53.925 [2024-05-13 03:11:44.685999] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:53.925 [2024-05-13 03:11:44.686244] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.925 [2024-05-13 03:11:44.686274] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.925 [2024-05-13 03:11:44.686289] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.925 [2024-05-13 03:11:44.689912] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:53.925 [2024-05-13 03:11:44.698884] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.925 [2024-05-13 03:11:44.699380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.925 [2024-05-13 03:11:44.699621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.925 [2024-05-13 03:11:44.699649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:53.925 [2024-05-13 03:11:44.699666] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:53.925 [2024-05-13 03:11:44.699918] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:53.925 [2024-05-13 03:11:44.700165] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:53.925 [2024-05-13 03:11:44.700189] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:53.925 [2024-05-13 03:11:44.700204] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.925 [2024-05-13 03:11:44.703833] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:53.925 [2024-05-13 03:11:44.712804] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.925 [2024-05-13 03:11:44.713287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.184 [2024-05-13 03:11:44.713527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.184 [2024-05-13 03:11:44.713557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.184 [2024-05-13 03:11:44.713575] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.184 [2024-05-13 03:11:44.713829] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.184 [2024-05-13 03:11:44.714075] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.184 [2024-05-13 03:11:44.714099] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.184 [2024-05-13 03:11:44.714115] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.184 [2024-05-13 03:11:44.717744] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.184 [2024-05-13 03:11:44.726722] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.184 [2024-05-13 03:11:44.727240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.184 [2024-05-13 03:11:44.727478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.184 [2024-05-13 03:11:44.727506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.184 [2024-05-13 03:11:44.727524] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.184 [2024-05-13 03:11:44.727776] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.184 [2024-05-13 03:11:44.728022] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.184 [2024-05-13 03:11:44.728047] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.184 [2024-05-13 03:11:44.728067] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.184 [2024-05-13 03:11:44.731684] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.184 [2024-05-13 03:11:44.740723] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.184 [2024-05-13 03:11:44.741245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.184 [2024-05-13 03:11:44.741515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.185 [2024-05-13 03:11:44.741544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.185 [2024-05-13 03:11:44.741561] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.185 [2024-05-13 03:11:44.741814] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.185 [2024-05-13 03:11:44.742060] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.185 [2024-05-13 03:11:44.742084] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.185 [2024-05-13 03:11:44.742100] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.185 [2024-05-13 03:11:44.745725] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.185 [2024-05-13 03:11:44.754718] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.185 [2024-05-13 03:11:44.755369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.185 [2024-05-13 03:11:44.755656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.185 [2024-05-13 03:11:44.755685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.185 [2024-05-13 03:11:44.755712] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.185 [2024-05-13 03:11:44.755956] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.185 [2024-05-13 03:11:44.756202] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.185 [2024-05-13 03:11:44.756226] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.185 [2024-05-13 03:11:44.756242] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.185 [2024-05-13 03:11:44.759869] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.185 [2024-05-13 03:11:44.768637] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.185 [2024-05-13 03:11:44.769126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.185 [2024-05-13 03:11:44.769351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.185 [2024-05-13 03:11:44.769379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.185 [2024-05-13 03:11:44.769396] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.185 [2024-05-13 03:11:44.769637] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.185 [2024-05-13 03:11:44.769893] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.185 [2024-05-13 03:11:44.769918] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.185 [2024-05-13 03:11:44.769933] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.185 [2024-05-13 03:11:44.773550] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.185 [2024-05-13 03:11:44.782542] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.185 [2024-05-13 03:11:44.783055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.185 [2024-05-13 03:11:44.783287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.185 [2024-05-13 03:11:44.783316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.185 [2024-05-13 03:11:44.783333] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.185 [2024-05-13 03:11:44.783574] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.185 [2024-05-13 03:11:44.783833] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.185 [2024-05-13 03:11:44.783858] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.185 [2024-05-13 03:11:44.783874] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.185 [2024-05-13 03:11:44.787491] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.185 [2024-05-13 03:11:44.796477] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.185 [2024-05-13 03:11:44.796969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.185 [2024-05-13 03:11:44.797211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.185 [2024-05-13 03:11:44.797239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.185 [2024-05-13 03:11:44.797256] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.185 [2024-05-13 03:11:44.797497] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.185 [2024-05-13 03:11:44.797756] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.185 [2024-05-13 03:11:44.797782] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.185 [2024-05-13 03:11:44.797797] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.185 [2024-05-13 03:11:44.801411] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.185 [2024-05-13 03:11:44.810381] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.185 [2024-05-13 03:11:44.810863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.185 [2024-05-13 03:11:44.811130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.185 [2024-05-13 03:11:44.811159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.185 [2024-05-13 03:11:44.811176] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.185 [2024-05-13 03:11:44.811417] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.185 [2024-05-13 03:11:44.811662] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.185 [2024-05-13 03:11:44.811686] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.185 [2024-05-13 03:11:44.811711] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.185 [2024-05-13 03:11:44.815326] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.185 [2024-05-13 03:11:44.824299] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.185 [2024-05-13 03:11:44.824774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.185 [2024-05-13 03:11:44.825036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.185 [2024-05-13 03:11:44.825062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.185 [2024-05-13 03:11:44.825078] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.185 [2024-05-13 03:11:44.825341] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.185 [2024-05-13 03:11:44.825587] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.185 [2024-05-13 03:11:44.825611] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.185 [2024-05-13 03:11:44.825626] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.185 [2024-05-13 03:11:44.829251] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.185 [2024-05-13 03:11:44.838230] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.185 [2024-05-13 03:11:44.838733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.185 [2024-05-13 03:11:44.838976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.185 [2024-05-13 03:11:44.839002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.185 [2024-05-13 03:11:44.839018] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.185 [2024-05-13 03:11:44.839277] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.185 [2024-05-13 03:11:44.839523] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.185 [2024-05-13 03:11:44.839547] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.185 [2024-05-13 03:11:44.839562] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.185 [2024-05-13 03:11:44.843191] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.185 [2024-05-13 03:11:44.852168] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.185 [2024-05-13 03:11:44.852679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.185 [2024-05-13 03:11:44.852969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.185 [2024-05-13 03:11:44.852998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.185 [2024-05-13 03:11:44.853015] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.185 [2024-05-13 03:11:44.853256] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.185 [2024-05-13 03:11:44.853501] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.185 [2024-05-13 03:11:44.853525] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.185 [2024-05-13 03:11:44.853540] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.185 [2024-05-13 03:11:44.857165] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.185 [2024-05-13 03:11:44.866144] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.185 [2024-05-13 03:11:44.866655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.185 [2024-05-13 03:11:44.866914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.185 [2024-05-13 03:11:44.866944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.185 [2024-05-13 03:11:44.866961] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.185 [2024-05-13 03:11:44.867202] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.185 [2024-05-13 03:11:44.867447] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.185 [2024-05-13 03:11:44.867471] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.185 [2024-05-13 03:11:44.867486] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.185 [2024-05-13 03:11:44.871116] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.186 [2024-05-13 03:11:44.880098] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.186 [2024-05-13 03:11:44.880576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.186 [2024-05-13 03:11:44.880847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.186 [2024-05-13 03:11:44.880873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.186 [2024-05-13 03:11:44.880889] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.186 [2024-05-13 03:11:44.881157] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.186 [2024-05-13 03:11:44.881402] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.186 [2024-05-13 03:11:44.881426] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.186 [2024-05-13 03:11:44.881441] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.186 [2024-05-13 03:11:44.885066] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.186 [2024-05-13 03:11:44.894033] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.186 [2024-05-13 03:11:44.894483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.186 [2024-05-13 03:11:44.894755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.186 [2024-05-13 03:11:44.894785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.186 [2024-05-13 03:11:44.894802] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.186 [2024-05-13 03:11:44.895044] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.186 [2024-05-13 03:11:44.895289] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.186 [2024-05-13 03:11:44.895313] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.186 [2024-05-13 03:11:44.895328] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.186 [2024-05-13 03:11:44.898957] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.186 [2024-05-13 03:11:44.907943] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.186 [2024-05-13 03:11:44.908422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.186 [2024-05-13 03:11:44.908653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.186 [2024-05-13 03:11:44.908681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.186 [2024-05-13 03:11:44.908715] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.186 [2024-05-13 03:11:44.908959] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.186 [2024-05-13 03:11:44.909204] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.186 [2024-05-13 03:11:44.909227] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.186 [2024-05-13 03:11:44.909243] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.186 [2024-05-13 03:11:44.912870] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.186 [2024-05-13 03:11:44.921846] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.186 [2024-05-13 03:11:44.922480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.186 [2024-05-13 03:11:44.922767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.186 [2024-05-13 03:11:44.922796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.186 [2024-05-13 03:11:44.922813] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.186 [2024-05-13 03:11:44.923053] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.186 [2024-05-13 03:11:44.923298] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.186 [2024-05-13 03:11:44.923322] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.186 [2024-05-13 03:11:44.923337] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.186 [2024-05-13 03:11:44.926958] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.186 [2024-05-13 03:11:44.935930] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.186 [2024-05-13 03:11:44.936432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.186 [2024-05-13 03:11:44.936674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.186 [2024-05-13 03:11:44.936712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.186 [2024-05-13 03:11:44.936732] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.186 [2024-05-13 03:11:44.936972] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.186 [2024-05-13 03:11:44.937218] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.186 [2024-05-13 03:11:44.937241] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.186 [2024-05-13 03:11:44.937257] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.186 [2024-05-13 03:11:44.940921] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.186 [2024-05-13 03:11:44.949894] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.186 [2024-05-13 03:11:44.950398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.186 [2024-05-13 03:11:44.950679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.186 [2024-05-13 03:11:44.950717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.186 [2024-05-13 03:11:44.950736] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.186 [2024-05-13 03:11:44.950983] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.186 [2024-05-13 03:11:44.951229] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.186 [2024-05-13 03:11:44.951253] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.186 [2024-05-13 03:11:44.951269] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.186 [2024-05-13 03:11:44.954889] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.186 [2024-05-13 03:11:44.963861] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.186 [2024-05-13 03:11:44.964551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.186 [2024-05-13 03:11:44.964865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.186 [2024-05-13 03:11:44.964895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.186 [2024-05-13 03:11:44.964912] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.186 [2024-05-13 03:11:44.965153] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.186 [2024-05-13 03:11:44.965398] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.186 [2024-05-13 03:11:44.965422] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.186 [2024-05-13 03:11:44.965437] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.186 [2024-05-13 03:11:44.969064] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.186 [2024-05-13 03:11:44.977829] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.186 [2024-05-13 03:11:44.978343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.186 [2024-05-13 03:11:44.978610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.186 [2024-05-13 03:11:44.978639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.186 [2024-05-13 03:11:44.978656] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.186 [2024-05-13 03:11:44.978907] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.186 [2024-05-13 03:11:44.979153] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.186 [2024-05-13 03:11:44.979177] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.186 [2024-05-13 03:11:44.979193] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.186 [2024-05-13 03:11:44.982816] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.445 [2024-05-13 03:11:44.991792] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.445 [2024-05-13 03:11:44.992302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.445 [2024-05-13 03:11:44.992580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.445 [2024-05-13 03:11:44.992608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.445 [2024-05-13 03:11:44.992625] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.445 [2024-05-13 03:11:44.992877] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.445 [2024-05-13 03:11:44.993129] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.445 [2024-05-13 03:11:44.993152] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.445 [2024-05-13 03:11:44.993167] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.445 [2024-05-13 03:11:44.996791] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.445 [2024-05-13 03:11:45.005762] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.445 [2024-05-13 03:11:45.006390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.445 [2024-05-13 03:11:45.006678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.445 [2024-05-13 03:11:45.006716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.445 [2024-05-13 03:11:45.006735] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.445 [2024-05-13 03:11:45.006976] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.445 [2024-05-13 03:11:45.007222] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.445 [2024-05-13 03:11:45.007246] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.445 [2024-05-13 03:11:45.007261] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.445 [2024-05-13 03:11:45.010883] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.445 [2024-05-13 03:11:45.019854] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.445 [2024-05-13 03:11:45.020368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.445 [2024-05-13 03:11:45.020606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.445 [2024-05-13 03:11:45.020631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.445 [2024-05-13 03:11:45.020646] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.445 [2024-05-13 03:11:45.020925] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.445 [2024-05-13 03:11:45.021172] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.445 [2024-05-13 03:11:45.021196] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.445 [2024-05-13 03:11:45.021211] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.446 [2024-05-13 03:11:45.024836] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.446 [2024-05-13 03:11:45.033888] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.446 [2024-05-13 03:11:45.034400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.446 [2024-05-13 03:11:45.034664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.446 [2024-05-13 03:11:45.034690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.446 [2024-05-13 03:11:45.034716] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.446 [2024-05-13 03:11:45.034969] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.446 [2024-05-13 03:11:45.035214] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.446 [2024-05-13 03:11:45.035244] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.446 [2024-05-13 03:11:45.035259] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.446 [2024-05-13 03:11:45.038886] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.446 [2024-05-13 03:11:45.047867] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.446 [2024-05-13 03:11:45.048374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.446 [2024-05-13 03:11:45.048640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.446 [2024-05-13 03:11:45.048669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.446 [2024-05-13 03:11:45.048686] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.446 [2024-05-13 03:11:45.048938] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.446 [2024-05-13 03:11:45.049185] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.446 [2024-05-13 03:11:45.049208] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.446 [2024-05-13 03:11:45.049224] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.446 [2024-05-13 03:11:45.052855] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.446 [2024-05-13 03:11:45.061827] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.446 [2024-05-13 03:11:45.062280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.446 [2024-05-13 03:11:45.062638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.446 [2024-05-13 03:11:45.062704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.446 [2024-05-13 03:11:45.062724] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.446 [2024-05-13 03:11:45.062965] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.446 [2024-05-13 03:11:45.063210] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.446 [2024-05-13 03:11:45.063234] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.446 [2024-05-13 03:11:45.063250] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.446 [2024-05-13 03:11:45.066875] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.446 [2024-05-13 03:11:45.075852] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.446 [2024-05-13 03:11:45.076334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.446 [2024-05-13 03:11:45.076577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.446 [2024-05-13 03:11:45.076607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.446 [2024-05-13 03:11:45.076624] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.446 [2024-05-13 03:11:45.076876] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.446 [2024-05-13 03:11:45.077133] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.446 [2024-05-13 03:11:45.077158] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.446 [2024-05-13 03:11:45.077178] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.446 [2024-05-13 03:11:45.080810] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.446 [2024-05-13 03:11:45.089876] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.446 [2024-05-13 03:11:45.090367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.446 [2024-05-13 03:11:45.090581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.446 [2024-05-13 03:11:45.090608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.446 [2024-05-13 03:11:45.090625] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.446 [2024-05-13 03:11:45.090880] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.446 [2024-05-13 03:11:45.091127] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.446 [2024-05-13 03:11:45.091151] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.446 [2024-05-13 03:11:45.091167] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.446 [2024-05-13 03:11:45.094792] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.446 [2024-05-13 03:11:45.103765] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.446 [2024-05-13 03:11:45.104243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.446 [2024-05-13 03:11:45.104510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.446 [2024-05-13 03:11:45.104538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.446 [2024-05-13 03:11:45.104555] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.446 [2024-05-13 03:11:45.104807] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.446 [2024-05-13 03:11:45.105054] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.446 [2024-05-13 03:11:45.105077] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.446 [2024-05-13 03:11:45.105093] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.446 [2024-05-13 03:11:45.108716] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.446 [2024-05-13 03:11:45.117673] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.446 [2024-05-13 03:11:45.118170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.446 [2024-05-13 03:11:45.118562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.446 [2024-05-13 03:11:45.118590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.446 [2024-05-13 03:11:45.118608] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.446 [2024-05-13 03:11:45.118878] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.446 [2024-05-13 03:11:45.119129] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.446 [2024-05-13 03:11:45.119153] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.446 [2024-05-13 03:11:45.119168] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.446 [2024-05-13 03:11:45.122798] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.446 [2024-05-13 03:11:45.131772] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.446 [2024-05-13 03:11:45.132273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.446 [2024-05-13 03:11:45.132518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.446 [2024-05-13 03:11:45.132545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.446 [2024-05-13 03:11:45.132562] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.446 [2024-05-13 03:11:45.132815] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.446 [2024-05-13 03:11:45.133061] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.446 [2024-05-13 03:11:45.133085] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.446 [2024-05-13 03:11:45.133100] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.446 [2024-05-13 03:11:45.136723] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.446 [2024-05-13 03:11:45.145691] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.446 [2024-05-13 03:11:45.146193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.446 [2024-05-13 03:11:45.146463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.446 [2024-05-13 03:11:45.146492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.446 [2024-05-13 03:11:45.146509] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.446 [2024-05-13 03:11:45.146764] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.446 [2024-05-13 03:11:45.147011] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.446 [2024-05-13 03:11:45.147034] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.446 [2024-05-13 03:11:45.147049] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.446 [2024-05-13 03:11:45.150714] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.446 [2024-05-13 03:11:45.159679] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.446 [2024-05-13 03:11:45.160185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.446 [2024-05-13 03:11:45.160456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.446 [2024-05-13 03:11:45.160485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.446 [2024-05-13 03:11:45.160502] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.446 [2024-05-13 03:11:45.160755] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.447 [2024-05-13 03:11:45.161002] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.447 [2024-05-13 03:11:45.161026] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.447 [2024-05-13 03:11:45.161041] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.447 [2024-05-13 03:11:45.164659] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.447 [2024-05-13 03:11:45.173637] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.447 [2024-05-13 03:11:45.174130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.447 [2024-05-13 03:11:45.174349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.447 [2024-05-13 03:11:45.174374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.447 [2024-05-13 03:11:45.174390] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.447 [2024-05-13 03:11:45.174651] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.447 [2024-05-13 03:11:45.174907] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.447 [2024-05-13 03:11:45.174931] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.447 [2024-05-13 03:11:45.174946] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.447 [2024-05-13 03:11:45.178561] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.447 [2024-05-13 03:11:45.187527] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.447 [2024-05-13 03:11:45.188032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.447 [2024-05-13 03:11:45.188289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.447 [2024-05-13 03:11:45.188317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.447 [2024-05-13 03:11:45.188334] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.447 [2024-05-13 03:11:45.188576] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.447 [2024-05-13 03:11:45.188834] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.447 [2024-05-13 03:11:45.188859] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.447 [2024-05-13 03:11:45.188874] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.447 [2024-05-13 03:11:45.192490] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.447 [2024-05-13 03:11:45.201458] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.447 [2024-05-13 03:11:45.201974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.447 [2024-05-13 03:11:45.202224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.447 [2024-05-13 03:11:45.202266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.447 [2024-05-13 03:11:45.202283] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.447 [2024-05-13 03:11:45.202524] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.447 [2024-05-13 03:11:45.202784] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.447 [2024-05-13 03:11:45.202809] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.447 [2024-05-13 03:11:45.202824] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.447 [2024-05-13 03:11:45.206439] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.447 [2024-05-13 03:11:45.215416] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.447 [2024-05-13 03:11:45.215912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.447 [2024-05-13 03:11:45.216150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.447 [2024-05-13 03:11:45.216179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.447 [2024-05-13 03:11:45.216197] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.447 [2024-05-13 03:11:45.216438] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.447 [2024-05-13 03:11:45.216684] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.447 [2024-05-13 03:11:45.216720] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.447 [2024-05-13 03:11:45.216737] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.447 [2024-05-13 03:11:45.220353] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.447 [2024-05-13 03:11:45.229329] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.447 [2024-05-13 03:11:45.229840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.447 [2024-05-13 03:11:45.230040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.447 [2024-05-13 03:11:45.230065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.447 [2024-05-13 03:11:45.230080] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.447 [2024-05-13 03:11:45.230335] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.447 [2024-05-13 03:11:45.230582] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.447 [2024-05-13 03:11:45.230605] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.447 [2024-05-13 03:11:45.230621] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.447 [2024-05-13 03:11:45.234244] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.447 [2024-05-13 03:11:45.243212] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.447 [2024-05-13 03:11:45.243709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.447 [2024-05-13 03:11:45.243975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.447 [2024-05-13 03:11:45.244003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.447 [2024-05-13 03:11:45.244020] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.447 [2024-05-13 03:11:45.244262] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.447 [2024-05-13 03:11:45.244508] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.447 [2024-05-13 03:11:45.244531] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.447 [2024-05-13 03:11:45.244547] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.707 [2024-05-13 03:11:45.248173] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.707 [2024-05-13 03:11:45.257138] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.707 [2024-05-13 03:11:45.257634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.707 [2024-05-13 03:11:45.257972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.707 [2024-05-13 03:11:45.258008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.707 [2024-05-13 03:11:45.258026] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.707 [2024-05-13 03:11:45.258267] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.707 [2024-05-13 03:11:45.258512] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.707 [2024-05-13 03:11:45.258535] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.707 [2024-05-13 03:11:45.258551] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.707 [2024-05-13 03:11:45.262190] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.707 [2024-05-13 03:11:45.271148] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.707 [2024-05-13 03:11:45.271676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.707 [2024-05-13 03:11:45.271920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.707 [2024-05-13 03:11:45.271951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.707 [2024-05-13 03:11:45.271968] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.707 [2024-05-13 03:11:45.272210] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.707 [2024-05-13 03:11:45.272454] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.707 [2024-05-13 03:11:45.272478] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.707 [2024-05-13 03:11:45.272493] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.707 [2024-05-13 03:11:45.276127] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.707 [2024-05-13 03:11:45.285090] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.707 [2024-05-13 03:11:45.285567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.707 [2024-05-13 03:11:45.285833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.707 [2024-05-13 03:11:45.285862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.707 [2024-05-13 03:11:45.285880] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.707 [2024-05-13 03:11:45.286121] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.707 [2024-05-13 03:11:45.286366] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.707 [2024-05-13 03:11:45.286389] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.707 [2024-05-13 03:11:45.286405] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.707 [2024-05-13 03:11:45.290027] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.707 [2024-05-13 03:11:45.298998] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.707 [2024-05-13 03:11:45.299509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.707 [2024-05-13 03:11:45.299768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.707 [2024-05-13 03:11:45.299797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.707 [2024-05-13 03:11:45.299819] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.707 [2024-05-13 03:11:45.300062] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.707 [2024-05-13 03:11:45.300308] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.707 [2024-05-13 03:11:45.300332] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.707 [2024-05-13 03:11:45.300347] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.707 [2024-05-13 03:11:45.303971] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.707 [2024-05-13 03:11:45.312931] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.707 [2024-05-13 03:11:45.313465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.707 [2024-05-13 03:11:45.313755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.707 [2024-05-13 03:11:45.313784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.707 [2024-05-13 03:11:45.313801] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.707 [2024-05-13 03:11:45.314042] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.707 [2024-05-13 03:11:45.314287] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.707 [2024-05-13 03:11:45.314311] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.707 [2024-05-13 03:11:45.314327] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.707 [2024-05-13 03:11:45.317952] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.707 [2024-05-13 03:11:45.326923] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.707 [2024-05-13 03:11:45.327404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.707 [2024-05-13 03:11:45.327685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.708 [2024-05-13 03:11:45.327722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.708 [2024-05-13 03:11:45.327740] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.708 [2024-05-13 03:11:45.327980] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.708 [2024-05-13 03:11:45.328226] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.708 [2024-05-13 03:11:45.328249] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.708 [2024-05-13 03:11:45.328265] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.708 [2024-05-13 03:11:45.331896] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.708 [2024-05-13 03:11:45.340865] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.708 [2024-05-13 03:11:45.341349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.708 [2024-05-13 03:11:45.341643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.708 [2024-05-13 03:11:45.341671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.708 [2024-05-13 03:11:45.341688] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.708 [2024-05-13 03:11:45.341956] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.708 [2024-05-13 03:11:45.342221] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.708 [2024-05-13 03:11:45.342245] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.708 [2024-05-13 03:11:45.342261] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.708 [2024-05-13 03:11:45.345849] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.708 [2024-05-13 03:11:45.354791] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.708 [2024-05-13 03:11:45.355350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.708 [2024-05-13 03:11:45.355632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.708 [2024-05-13 03:11:45.355661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.708 [2024-05-13 03:11:45.355678] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.708 [2024-05-13 03:11:45.355921] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.708 [2024-05-13 03:11:45.356185] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.708 [2024-05-13 03:11:45.356209] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.708 [2024-05-13 03:11:45.356225] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.708 [2024-05-13 03:11:45.359883] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.708 [2024-05-13 03:11:45.368845] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.708 [2024-05-13 03:11:45.369365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.708 [2024-05-13 03:11:45.369613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.708 [2024-05-13 03:11:45.369641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.708 [2024-05-13 03:11:45.369658] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.708 [2024-05-13 03:11:45.369921] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.708 [2024-05-13 03:11:45.370176] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.708 [2024-05-13 03:11:45.370200] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.708 [2024-05-13 03:11:45.370216] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.708 [2024-05-13 03:11:45.373842] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.708 [2024-05-13 03:11:45.382807] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.708 [2024-05-13 03:11:45.383296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.708 [2024-05-13 03:11:45.383618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.708 [2024-05-13 03:11:45.383647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.708 [2024-05-13 03:11:45.383664] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.708 [2024-05-13 03:11:45.383913] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.708 [2024-05-13 03:11:45.384165] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.708 [2024-05-13 03:11:45.384189] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.708 [2024-05-13 03:11:45.384205] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.708 [2024-05-13 03:11:45.387824] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.708 [2024-05-13 03:11:45.396791] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.708 [2024-05-13 03:11:45.397341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.708 [2024-05-13 03:11:45.397620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.708 [2024-05-13 03:11:45.397648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.708 [2024-05-13 03:11:45.397665] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.708 [2024-05-13 03:11:45.397927] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.708 [2024-05-13 03:11:45.398166] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.708 [2024-05-13 03:11:45.398189] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.708 [2024-05-13 03:11:45.398203] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.708 [2024-05-13 03:11:45.401669] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.708 [2024-05-13 03:11:45.410386] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.708 [2024-05-13 03:11:45.410811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.708 [2024-05-13 03:11:45.411065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.708 [2024-05-13 03:11:45.411090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.708 [2024-05-13 03:11:45.411106] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.708 [2024-05-13 03:11:45.411334] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.708 [2024-05-13 03:11:45.411548] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.708 [2024-05-13 03:11:45.411569] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.708 [2024-05-13 03:11:45.411582] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.708 [2024-05-13 03:11:45.414671] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.708 [2024-05-13 03:11:45.423612] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.708 [2024-05-13 03:11:45.424132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.708 [2024-05-13 03:11:45.424383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.708 [2024-05-13 03:11:45.424408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.708 [2024-05-13 03:11:45.424424] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.708 [2024-05-13 03:11:45.424663] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.708 [2024-05-13 03:11:45.424899] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.708 [2024-05-13 03:11:45.424925] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.708 [2024-05-13 03:11:45.424939] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.708 [2024-05-13 03:11:45.427957] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.708 [2024-05-13 03:11:45.437184] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.708 [2024-05-13 03:11:45.437653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.708 [2024-05-13 03:11:45.437900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.708 [2024-05-13 03:11:45.437926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.708 [2024-05-13 03:11:45.437942] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.708 [2024-05-13 03:11:45.438194] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.708 [2024-05-13 03:11:45.438395] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.708 [2024-05-13 03:11:45.438414] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.708 [2024-05-13 03:11:45.438427] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.708 [2024-05-13 03:11:45.441440] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.708 [2024-05-13 03:11:45.450553] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.708 [2024-05-13 03:11:45.451069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.708 [2024-05-13 03:11:45.451304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.708 [2024-05-13 03:11:45.451329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.708 [2024-05-13 03:11:45.451344] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.708 [2024-05-13 03:11:45.451580] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.708 [2024-05-13 03:11:45.451828] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.708 [2024-05-13 03:11:45.451850] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.708 [2024-05-13 03:11:45.451863] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.708 [2024-05-13 03:11:45.454903] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.709 [2024-05-13 03:11:45.463962] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.709 [2024-05-13 03:11:45.464433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.709 [2024-05-13 03:11:45.464669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.709 [2024-05-13 03:11:45.464702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.709 [2024-05-13 03:11:45.464720] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.709 [2024-05-13 03:11:45.464952] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.709 [2024-05-13 03:11:45.465171] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.709 [2024-05-13 03:11:45.465191] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.709 [2024-05-13 03:11:45.465209] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.709 [2024-05-13 03:11:45.468226] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.709 [2024-05-13 03:11:45.477310] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.709 [2024-05-13 03:11:45.477819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.709 [2024-05-13 03:11:45.478047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.709 [2024-05-13 03:11:45.478073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.709 [2024-05-13 03:11:45.478089] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.709 [2024-05-13 03:11:45.478348] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.709 [2024-05-13 03:11:45.478550] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.709 [2024-05-13 03:11:45.478569] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.709 [2024-05-13 03:11:45.478582] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.709 [2024-05-13 03:11:45.481747] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.709 [2024-05-13 03:11:45.490749] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.709 [2024-05-13 03:11:45.491457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.709 [2024-05-13 03:11:45.491772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.709 [2024-05-13 03:11:45.491801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.709 [2024-05-13 03:11:45.491818] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.709 [2024-05-13 03:11:45.492081] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.709 [2024-05-13 03:11:45.492283] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.709 [2024-05-13 03:11:45.492303] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.709 [2024-05-13 03:11:45.492316] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.709 [2024-05-13 03:11:45.495373] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.709 [2024-05-13 03:11:45.504145] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.709 [2024-05-13 03:11:45.504669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.709 [2024-05-13 03:11:45.504901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.709 [2024-05-13 03:11:45.504927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.709 [2024-05-13 03:11:45.504943] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.709 [2024-05-13 03:11:45.505198] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.709 [2024-05-13 03:11:45.505400] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.709 [2024-05-13 03:11:45.505434] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.709 [2024-05-13 03:11:45.505447] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.969 [2024-05-13 03:11:45.508818] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.969 [2024-05-13 03:11:45.517545] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.969 [2024-05-13 03:11:45.518028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.969 [2024-05-13 03:11:45.518261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.969 [2024-05-13 03:11:45.518287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.969 [2024-05-13 03:11:45.518302] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.969 [2024-05-13 03:11:45.518519] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.969 [2024-05-13 03:11:45.518748] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.969 [2024-05-13 03:11:45.518784] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.969 [2024-05-13 03:11:45.518798] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.969 [2024-05-13 03:11:45.521826] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.969 [2024-05-13 03:11:45.530858] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.969 [2024-05-13 03:11:45.531374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.969 [2024-05-13 03:11:45.531629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.969 [2024-05-13 03:11:45.531669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.969 [2024-05-13 03:11:45.531684] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.969 [2024-05-13 03:11:45.531938] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.969 [2024-05-13 03:11:45.532160] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.969 [2024-05-13 03:11:45.532180] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.969 [2024-05-13 03:11:45.532192] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.969 [2024-05-13 03:11:45.535214] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.969 [2024-05-13 03:11:45.544125] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.969 [2024-05-13 03:11:45.544526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.969 [2024-05-13 03:11:45.544766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.969 [2024-05-13 03:11:45.544793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.969 [2024-05-13 03:11:45.544809] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.969 [2024-05-13 03:11:45.545053] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.969 [2024-05-13 03:11:45.545254] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.969 [2024-05-13 03:11:45.545273] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.969 [2024-05-13 03:11:45.545286] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.969 [2024-05-13 03:11:45.548486] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.969 [2024-05-13 03:11:45.557607] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.969 [2024-05-13 03:11:45.558130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.969 [2024-05-13 03:11:45.558416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.969 [2024-05-13 03:11:45.558441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.969 [2024-05-13 03:11:45.558457] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.969 [2024-05-13 03:11:45.558709] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.969 [2024-05-13 03:11:45.558939] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.969 [2024-05-13 03:11:45.558960] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.969 [2024-05-13 03:11:45.558974] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.969 [2024-05-13 03:11:45.562136] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.969 [2024-05-13 03:11:45.571038] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.969 [2024-05-13 03:11:45.571514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.969 [2024-05-13 03:11:45.571751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.969 [2024-05-13 03:11:45.571780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.969 [2024-05-13 03:11:45.571796] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.969 [2024-05-13 03:11:45.572058] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.969 [2024-05-13 03:11:45.572260] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.969 [2024-05-13 03:11:45.572280] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.969 [2024-05-13 03:11:45.572292] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.969 [2024-05-13 03:11:45.575345] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.969 [2024-05-13 03:11:45.584261] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.969 [2024-05-13 03:11:45.584693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.969 [2024-05-13 03:11:45.584956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.969 [2024-05-13 03:11:45.584982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.969 [2024-05-13 03:11:45.584998] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.969 [2024-05-13 03:11:45.585236] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.969 [2024-05-13 03:11:45.585438] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.969 [2024-05-13 03:11:45.585458] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.969 [2024-05-13 03:11:45.585470] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.969 [2024-05-13 03:11:45.588531] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.969 [2024-05-13 03:11:45.597634] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.969 [2024-05-13 03:11:45.598135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.969 [2024-05-13 03:11:45.598391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.969 [2024-05-13 03:11:45.598417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.969 [2024-05-13 03:11:45.598433] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.969 [2024-05-13 03:11:45.598693] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.969 [2024-05-13 03:11:45.598938] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.969 [2024-05-13 03:11:45.598960] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.969 [2024-05-13 03:11:45.598973] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.969 [2024-05-13 03:11:45.602016] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.969 [2024-05-13 03:11:45.610903] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.969 [2024-05-13 03:11:45.611400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.969 [2024-05-13 03:11:45.611647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.969 [2024-05-13 03:11:45.611672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.969 [2024-05-13 03:11:45.611688] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.969 [2024-05-13 03:11:45.611928] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.969 [2024-05-13 03:11:45.612151] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.969 [2024-05-13 03:11:45.612171] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.969 [2024-05-13 03:11:45.612183] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.969 [2024-05-13 03:11:45.615201] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.969 [2024-05-13 03:11:45.624299] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.969 [2024-05-13 03:11:45.624780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.969 [2024-05-13 03:11:45.625023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.969 [2024-05-13 03:11:45.625048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.970 [2024-05-13 03:11:45.625064] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.970 [2024-05-13 03:11:45.625304] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.970 [2024-05-13 03:11:45.625506] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.970 [2024-05-13 03:11:45.625525] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.970 [2024-05-13 03:11:45.625537] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.970 [2024-05-13 03:11:45.628581] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.970 [2024-05-13 03:11:45.637487] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.970 [2024-05-13 03:11:45.637999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.970 [2024-05-13 03:11:45.638309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.970 [2024-05-13 03:11:45.638334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.970 [2024-05-13 03:11:45.638355] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.970 [2024-05-13 03:11:45.638611] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.970 [2024-05-13 03:11:45.638858] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.970 [2024-05-13 03:11:45.638880] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.970 [2024-05-13 03:11:45.638894] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.970 [2024-05-13 03:11:45.641915] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.970 [2024-05-13 03:11:45.650815] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.970 [2024-05-13 03:11:45.651289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.970 [2024-05-13 03:11:45.651591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.970 [2024-05-13 03:11:45.651615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.970 [2024-05-13 03:11:45.651630] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.970 [2024-05-13 03:11:45.651888] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.970 [2024-05-13 03:11:45.652113] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.970 [2024-05-13 03:11:45.652133] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.970 [2024-05-13 03:11:45.652146] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.970 [2024-05-13 03:11:45.655163] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.970 [2024-05-13 03:11:45.664058] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.970 [2024-05-13 03:11:45.664548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.970 [2024-05-13 03:11:45.664797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.970 [2024-05-13 03:11:45.664823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.970 [2024-05-13 03:11:45.664839] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.970 [2024-05-13 03:11:45.665083] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.970 [2024-05-13 03:11:45.665300] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.970 [2024-05-13 03:11:45.665320] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.970 [2024-05-13 03:11:45.665332] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.970 [2024-05-13 03:11:45.668350] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.970 [2024-05-13 03:11:45.677436] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.970 [2024-05-13 03:11:45.677867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.970 [2024-05-13 03:11:45.678203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.970 [2024-05-13 03:11:45.678254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.970 [2024-05-13 03:11:45.678271] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.970 [2024-05-13 03:11:45.678494] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.970 [2024-05-13 03:11:45.678721] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.970 [2024-05-13 03:11:45.678742] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.970 [2024-05-13 03:11:45.678755] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.970 [2024-05-13 03:11:45.681802] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.970 [2024-05-13 03:11:45.690686] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.970 [2024-05-13 03:11:45.691184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.970 [2024-05-13 03:11:45.691453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.970 [2024-05-13 03:11:45.691479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.970 [2024-05-13 03:11:45.691494] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.970 [2024-05-13 03:11:45.691777] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.970 [2024-05-13 03:11:45.692014] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.970 [2024-05-13 03:11:45.692035] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.970 [2024-05-13 03:11:45.692048] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.970 [2024-05-13 03:11:45.695082] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.970 [2024-05-13 03:11:45.704033] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.970 [2024-05-13 03:11:45.704496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.970 [2024-05-13 03:11:45.704751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.970 [2024-05-13 03:11:45.704776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.970 [2024-05-13 03:11:45.704792] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.970 [2024-05-13 03:11:45.705017] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.970 [2024-05-13 03:11:45.705218] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.970 [2024-05-13 03:11:45.705237] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.970 [2024-05-13 03:11:45.705250] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.970 [2024-05-13 03:11:45.708265] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.970 [2024-05-13 03:11:45.717368] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.970 [2024-05-13 03:11:45.717863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.970 [2024-05-13 03:11:45.718100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.970 [2024-05-13 03:11:45.718125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.970 [2024-05-13 03:11:45.718140] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.970 [2024-05-13 03:11:45.718398] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.970 [2024-05-13 03:11:45.718604] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.970 [2024-05-13 03:11:45.718624] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.970 [2024-05-13 03:11:45.718636] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.970 [2024-05-13 03:11:45.721678] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.970 [2024-05-13 03:11:45.730618] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.970 [2024-05-13 03:11:45.731039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.970 [2024-05-13 03:11:45.731262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.970 [2024-05-13 03:11:45.731286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.970 [2024-05-13 03:11:45.731301] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.970 [2024-05-13 03:11:45.731521] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.970 [2024-05-13 03:11:45.731748] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.970 [2024-05-13 03:11:45.731784] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.970 [2024-05-13 03:11:45.731799] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.970 [2024-05-13 03:11:45.734883] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.970 [2024-05-13 03:11:45.744028] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.970 [2024-05-13 03:11:45.744530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.970 [2024-05-13 03:11:45.744744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.970 [2024-05-13 03:11:45.744771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.970 [2024-05-13 03:11:45.744786] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.970 [2024-05-13 03:11:45.745016] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.970 [2024-05-13 03:11:45.745233] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.970 [2024-05-13 03:11:45.745253] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.970 [2024-05-13 03:11:45.745265] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.970 [2024-05-13 03:11:45.748442] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.970 [2024-05-13 03:11:45.757247] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.971 [2024-05-13 03:11:45.757674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.971 [2024-05-13 03:11:45.757950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.971 [2024-05-13 03:11:45.757975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:54.971 [2024-05-13 03:11:45.757991] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:54.971 [2024-05-13 03:11:45.758248] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:54.971 [2024-05-13 03:11:45.758449] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:54.971 [2024-05-13 03:11:45.758473] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:54.971 [2024-05-13 03:11:45.758487] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.971 [2024-05-13 03:11:45.761544] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.231 [2024-05-13 03:11:45.771029] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.231 [2024-05-13 03:11:45.771448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.231 [2024-05-13 03:11:45.771723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.231 [2024-05-13 03:11:45.771750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.231 [2024-05-13 03:11:45.771765] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.231 [2024-05-13 03:11:45.771982] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.231 [2024-05-13 03:11:45.772219] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.231 [2024-05-13 03:11:45.772239] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.231 [2024-05-13 03:11:45.772251] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.231 [2024-05-13 03:11:45.775634] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.231 [2024-05-13 03:11:45.784316] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.231 [2024-05-13 03:11:45.784812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.231 [2024-05-13 03:11:45.785071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.231 [2024-05-13 03:11:45.785097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.231 [2024-05-13 03:11:45.785113] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.231 [2024-05-13 03:11:45.785368] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.231 [2024-05-13 03:11:45.785569] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.231 [2024-05-13 03:11:45.785588] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.231 [2024-05-13 03:11:45.785601] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.231 [2024-05-13 03:11:45.788627] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.231 [2024-05-13 03:11:45.797540] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.231 [2024-05-13 03:11:45.798060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.231 [2024-05-13 03:11:45.798289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.231 [2024-05-13 03:11:45.798315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.231 [2024-05-13 03:11:45.798330] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.231 [2024-05-13 03:11:45.798555] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.231 [2024-05-13 03:11:45.798814] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.231 [2024-05-13 03:11:45.798836] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.231 [2024-05-13 03:11:45.798855] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.231 [2024-05-13 03:11:45.802018] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.231 [2024-05-13 03:11:45.811015] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.231 [2024-05-13 03:11:45.811478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.231 [2024-05-13 03:11:45.811737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.231 [2024-05-13 03:11:45.811778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.231 [2024-05-13 03:11:45.811793] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.231 [2024-05-13 03:11:45.812014] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.231 [2024-05-13 03:11:45.812231] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.231 [2024-05-13 03:11:45.812266] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.231 [2024-05-13 03:11:45.812279] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.231 [2024-05-13 03:11:45.815398] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.231 [2024-05-13 03:11:45.824272] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.231 [2024-05-13 03:11:45.824762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.231 [2024-05-13 03:11:45.825019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.231 [2024-05-13 03:11:45.825044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.231 [2024-05-13 03:11:45.825059] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.231 [2024-05-13 03:11:45.825283] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.231 [2024-05-13 03:11:45.825498] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.231 [2024-05-13 03:11:45.825518] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.231 [2024-05-13 03:11:45.825531] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.231 [2024-05-13 03:11:45.828666] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.231 [2024-05-13 03:11:45.837593] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.231 [2024-05-13 03:11:45.838088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.231 [2024-05-13 03:11:45.838477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.231 [2024-05-13 03:11:45.838501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.231 [2024-05-13 03:11:45.838516] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.231 [2024-05-13 03:11:45.838762] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.231 [2024-05-13 03:11:45.838992] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.231 [2024-05-13 03:11:45.839013] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.231 [2024-05-13 03:11:45.839026] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.231 [2024-05-13 03:11:45.842178] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.231 [2024-05-13 03:11:45.850991] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.231 [2024-05-13 03:11:45.851462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.231 [2024-05-13 03:11:45.851711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.231 [2024-05-13 03:11:45.851737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.231 [2024-05-13 03:11:45.851753] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.231 [2024-05-13 03:11:45.851969] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.231 [2024-05-13 03:11:45.852205] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.231 [2024-05-13 03:11:45.852225] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.231 [2024-05-13 03:11:45.852238] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.231 [2024-05-13 03:11:45.855254] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.231 [2024-05-13 03:11:45.864355] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.231 [2024-05-13 03:11:45.864843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.231 [2024-05-13 03:11:45.865071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.231 [2024-05-13 03:11:45.865094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.231 [2024-05-13 03:11:45.865109] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.231 [2024-05-13 03:11:45.865343] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.231 [2024-05-13 03:11:45.865544] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.231 [2024-05-13 03:11:45.865564] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.231 [2024-05-13 03:11:45.865577] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.232 [2024-05-13 03:11:45.868616] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.232 [2024-05-13 03:11:45.877765] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.232 [2024-05-13 03:11:45.878237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.232 [2024-05-13 03:11:45.878467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.232 [2024-05-13 03:11:45.878495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.232 [2024-05-13 03:11:45.878525] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.232 [2024-05-13 03:11:45.878786] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.232 [2024-05-13 03:11:45.878994] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.232 [2024-05-13 03:11:45.879014] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.232 [2024-05-13 03:11:45.879027] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.232 [2024-05-13 03:11:45.882101] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.232 [2024-05-13 03:11:45.891158] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.232 [2024-05-13 03:11:45.891653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.232 [2024-05-13 03:11:45.891933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.232 [2024-05-13 03:11:45.891959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.232 [2024-05-13 03:11:45.891974] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.232 [2024-05-13 03:11:45.892228] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.232 [2024-05-13 03:11:45.892429] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.232 [2024-05-13 03:11:45.892450] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.232 [2024-05-13 03:11:45.892463] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.232 [2024-05-13 03:11:45.895528] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.232 [2024-05-13 03:11:45.904596] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.232 [2024-05-13 03:11:45.905083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.232 [2024-05-13 03:11:45.905337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.232 [2024-05-13 03:11:45.905362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.232 [2024-05-13 03:11:45.905378] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.232 [2024-05-13 03:11:45.905599] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.232 [2024-05-13 03:11:45.905850] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.232 [2024-05-13 03:11:45.905872] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.232 [2024-05-13 03:11:45.905886] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.232 [2024-05-13 03:11:45.908893] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.232 [2024-05-13 03:11:45.917969] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.232 [2024-05-13 03:11:45.918415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.232 [2024-05-13 03:11:45.918636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.232 [2024-05-13 03:11:45.918661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.232 [2024-05-13 03:11:45.918676] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.232 [2024-05-13 03:11:45.918938] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.232 [2024-05-13 03:11:45.919157] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.232 [2024-05-13 03:11:45.919177] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.232 [2024-05-13 03:11:45.919189] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.232 [2024-05-13 03:11:45.922240] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.232 [2024-05-13 03:11:45.931281] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.232 [2024-05-13 03:11:45.931749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.232 [2024-05-13 03:11:45.931982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.232 [2024-05-13 03:11:45.932007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.232 [2024-05-13 03:11:45.932022] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.232 [2024-05-13 03:11:45.932244] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.232 [2024-05-13 03:11:45.932460] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.232 [2024-05-13 03:11:45.932480] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.232 [2024-05-13 03:11:45.932493] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.232 [2024-05-13 03:11:45.935516] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.232 [2024-05-13 03:11:45.944628] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.232 [2024-05-13 03:11:45.945147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.232 [2024-05-13 03:11:45.945392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.232 [2024-05-13 03:11:45.945419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.232 [2024-05-13 03:11:45.945434] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.232 [2024-05-13 03:11:45.945652] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.232 [2024-05-13 03:11:45.945901] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.232 [2024-05-13 03:11:45.945923] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.232 [2024-05-13 03:11:45.945936] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.232 [2024-05-13 03:11:45.948971] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.232 [2024-05-13 03:11:45.957915] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.232 [2024-05-13 03:11:45.958404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.232 [2024-05-13 03:11:45.958702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.232 [2024-05-13 03:11:45.958743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.232 [2024-05-13 03:11:45.958759] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.232 [2024-05-13 03:11:45.959000] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.232 [2024-05-13 03:11:45.959218] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.232 [2024-05-13 03:11:45.959238] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.232 [2024-05-13 03:11:45.959251] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.232 [2024-05-13 03:11:45.962308] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.232 [2024-05-13 03:11:45.971354] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.232 [2024-05-13 03:11:45.971812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.232 [2024-05-13 03:11:45.972094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.232 [2024-05-13 03:11:45.972125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.232 [2024-05-13 03:11:45.972141] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.232 [2024-05-13 03:11:45.972400] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.232 [2024-05-13 03:11:45.972601] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.232 [2024-05-13 03:11:45.972620] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.232 [2024-05-13 03:11:45.972633] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.232 [2024-05-13 03:11:45.975778] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.232 [2024-05-13 03:11:45.984732] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.232 [2024-05-13 03:11:45.985239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.232 [2024-05-13 03:11:45.985472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.232 [2024-05-13 03:11:45.985496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.232 [2024-05-13 03:11:45.985511] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.232 [2024-05-13 03:11:45.985772] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.232 [2024-05-13 03:11:45.985994] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.232 [2024-05-13 03:11:45.986030] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.232 [2024-05-13 03:11:45.986044] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.232 [2024-05-13 03:11:45.989173] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.232 [2024-05-13 03:11:45.998153] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.232 [2024-05-13 03:11:45.998645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.232 [2024-05-13 03:11:45.998885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.232 [2024-05-13 03:11:45.998911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.232 [2024-05-13 03:11:45.998927] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.232 [2024-05-13 03:11:45.999183] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.233 [2024-05-13 03:11:45.999384] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.233 [2024-05-13 03:11:45.999404] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.233 [2024-05-13 03:11:45.999417] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.233 [2024-05-13 03:11:46.002475] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.233 [2024-05-13 03:11:46.011576] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.233 [2024-05-13 03:11:46.012074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.233 [2024-05-13 03:11:46.012463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.233 [2024-05-13 03:11:46.012489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.233 [2024-05-13 03:11:46.012510] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.233 [2024-05-13 03:11:46.012777] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.233 [2024-05-13 03:11:46.013005] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.233 [2024-05-13 03:11:46.013041] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.233 [2024-05-13 03:11:46.013053] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.233 [2024-05-13 03:11:46.016099] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.233 [2024-05-13 03:11:46.024993] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.233 [2024-05-13 03:11:46.025482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.233 [2024-05-13 03:11:46.025745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.233 [2024-05-13 03:11:46.025771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.233 [2024-05-13 03:11:46.025787] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.233 [2024-05-13 03:11:46.026032] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.233 [2024-05-13 03:11:46.026248] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.233 [2024-05-13 03:11:46.026268] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.233 [2024-05-13 03:11:46.026281] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.233 [2024-05-13 03:11:46.029615] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.492 [2024-05-13 03:11:46.038716] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.492 [2024-05-13 03:11:46.039202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.492 [2024-05-13 03:11:46.039454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.492 [2024-05-13 03:11:46.039480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.492 [2024-05-13 03:11:46.039495] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.492 [2024-05-13 03:11:46.039744] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.493 [2024-05-13 03:11:46.039971] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.493 [2024-05-13 03:11:46.039991] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.493 [2024-05-13 03:11:46.040004] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.493 [2024-05-13 03:11:46.043173] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.493 [2024-05-13 03:11:46.052176] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.493 [2024-05-13 03:11:46.052670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.493 [2024-05-13 03:11:46.052996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.493 [2024-05-13 03:11:46.053023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.493 [2024-05-13 03:11:46.053054] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.493 [2024-05-13 03:11:46.053294] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.493 [2024-05-13 03:11:46.053496] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.493 [2024-05-13 03:11:46.053515] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.493 [2024-05-13 03:11:46.053528] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.493 [2024-05-13 03:11:46.056737] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.493 [2024-05-13 03:11:46.065702] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.493 [2024-05-13 03:11:46.066185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.493 [2024-05-13 03:11:46.066405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.493 [2024-05-13 03:11:46.066431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.493 [2024-05-13 03:11:46.066446] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.493 [2024-05-13 03:11:46.066662] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.493 [2024-05-13 03:11:46.066921] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.493 [2024-05-13 03:11:46.066944] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.493 [2024-05-13 03:11:46.066958] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.493 [2024-05-13 03:11:46.070046] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.493 [2024-05-13 03:11:46.078994] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.493 [2024-05-13 03:11:46.079501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.493 [2024-05-13 03:11:46.079732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.493 [2024-05-13 03:11:46.079759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.493 [2024-05-13 03:11:46.079775] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.493 [2024-05-13 03:11:46.080005] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.493 [2024-05-13 03:11:46.080222] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.493 [2024-05-13 03:11:46.080242] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.493 [2024-05-13 03:11:46.080254] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.493 [2024-05-13 03:11:46.083276] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.493 [2024-05-13 03:11:46.092340] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.493 [2024-05-13 03:11:46.092833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.493 [2024-05-13 03:11:46.093065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.493 [2024-05-13 03:11:46.093092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.493 [2024-05-13 03:11:46.093123] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.493 [2024-05-13 03:11:46.093358] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.493 [2024-05-13 03:11:46.093565] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.493 [2024-05-13 03:11:46.093585] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.493 [2024-05-13 03:11:46.093597] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.493 [2024-05-13 03:11:46.096657] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.493 [2024-05-13 03:11:46.105737] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.493 [2024-05-13 03:11:46.106232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.493 [2024-05-13 03:11:46.106517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.493 [2024-05-13 03:11:46.106543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.493 [2024-05-13 03:11:46.106558] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.493 [2024-05-13 03:11:46.106831] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.493 [2024-05-13 03:11:46.107062] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.493 [2024-05-13 03:11:46.107083] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.493 [2024-05-13 03:11:46.107096] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.493 [2024-05-13 03:11:46.110132] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.493 [2024-05-13 03:11:46.119032] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.493 [2024-05-13 03:11:46.119517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.493 [2024-05-13 03:11:46.119890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.493 [2024-05-13 03:11:46.119917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.493 [2024-05-13 03:11:46.119932] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.493 [2024-05-13 03:11:46.120186] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.493 [2024-05-13 03:11:46.120387] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.493 [2024-05-13 03:11:46.120407] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.493 [2024-05-13 03:11:46.120419] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.493 [2024-05-13 03:11:46.123733] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.493 [2024-05-13 03:11:46.132339] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.493 [2024-05-13 03:11:46.132820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.493 [2024-05-13 03:11:46.133058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.493 [2024-05-13 03:11:46.133098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.493 [2024-05-13 03:11:46.133114] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.493 [2024-05-13 03:11:46.133348] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.493 [2024-05-13 03:11:46.133550] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.493 [2024-05-13 03:11:46.133575] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.493 [2024-05-13 03:11:46.133588] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.493 [2024-05-13 03:11:46.136643] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.493 [2024-05-13 03:11:46.145701] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.493 [2024-05-13 03:11:46.146444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.493 [2024-05-13 03:11:46.146759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.493 [2024-05-13 03:11:46.146788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.493 [2024-05-13 03:11:46.146805] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.493 [2024-05-13 03:11:46.147066] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.493 [2024-05-13 03:11:46.147270] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.493 [2024-05-13 03:11:46.147290] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.493 [2024-05-13 03:11:46.147302] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.493 [2024-05-13 03:11:46.150324] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.493 [2024-05-13 03:11:46.159097] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.493 [2024-05-13 03:11:46.159531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.493 [2024-05-13 03:11:46.159789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.493 [2024-05-13 03:11:46.159815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.493 [2024-05-13 03:11:46.159831] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.493 [2024-05-13 03:11:46.160077] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.493 [2024-05-13 03:11:46.160279] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.493 [2024-05-13 03:11:46.160298] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.493 [2024-05-13 03:11:46.160311] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.493 [2024-05-13 03:11:46.163327] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.493 [2024-05-13 03:11:46.172389] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.493 [2024-05-13 03:11:46.172894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.493 [2024-05-13 03:11:46.173150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.493 [2024-05-13 03:11:46.173173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.494 [2024-05-13 03:11:46.173188] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.494 [2024-05-13 03:11:46.173404] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.494 [2024-05-13 03:11:46.173605] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.494 [2024-05-13 03:11:46.173624] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.494 [2024-05-13 03:11:46.173645] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.494 [2024-05-13 03:11:46.176663] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.494 [2024-05-13 03:11:46.185800] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.494 [2024-05-13 03:11:46.186309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.494 [2024-05-13 03:11:46.186566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.494 [2024-05-13 03:11:46.186591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.494 [2024-05-13 03:11:46.186606] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.494 [2024-05-13 03:11:46.186857] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.494 [2024-05-13 03:11:46.187098] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.494 [2024-05-13 03:11:46.187118] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.494 [2024-05-13 03:11:46.187130] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.494 [2024-05-13 03:11:46.190195] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.494 [2024-05-13 03:11:46.199098] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.494 [2024-05-13 03:11:46.199605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.494 [2024-05-13 03:11:46.199899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.494 [2024-05-13 03:11:46.199926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.494 [2024-05-13 03:11:46.199942] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.494 [2024-05-13 03:11:46.200197] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.494 [2024-05-13 03:11:46.200399] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.494 [2024-05-13 03:11:46.200418] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.494 [2024-05-13 03:11:46.200430] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.494 [2024-05-13 03:11:46.203448] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.494 [2024-05-13 03:11:46.212390] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.494 [2024-05-13 03:11:46.212873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.494 [2024-05-13 03:11:46.213102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.494 [2024-05-13 03:11:46.213128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.494 [2024-05-13 03:11:46.213159] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.494 [2024-05-13 03:11:46.213393] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.494 [2024-05-13 03:11:46.213595] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.494 [2024-05-13 03:11:46.213614] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.494 [2024-05-13 03:11:46.213626] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.494 [2024-05-13 03:11:46.216692] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.494 [2024-05-13 03:11:46.225636] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.494 [2024-05-13 03:11:46.226131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.494 [2024-05-13 03:11:46.226311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.494 [2024-05-13 03:11:46.226336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.494 [2024-05-13 03:11:46.226351] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.494 [2024-05-13 03:11:46.226587] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.494 [2024-05-13 03:11:46.226830] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.494 [2024-05-13 03:11:46.226852] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.494 [2024-05-13 03:11:46.226864] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.494 [2024-05-13 03:11:46.229892] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.494 [2024-05-13 03:11:46.239021] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.494 [2024-05-13 03:11:46.239498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.494 [2024-05-13 03:11:46.239803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.494 [2024-05-13 03:11:46.239828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.494 [2024-05-13 03:11:46.239844] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.494 [2024-05-13 03:11:46.240081] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.494 [2024-05-13 03:11:46.240282] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.494 [2024-05-13 03:11:46.240301] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.494 [2024-05-13 03:11:46.240314] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.494 [2024-05-13 03:11:46.243379] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.494 [2024-05-13 03:11:46.252289] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.494 [2024-05-13 03:11:46.252704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.494 [2024-05-13 03:11:46.252994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.494 [2024-05-13 03:11:46.253020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.494 [2024-05-13 03:11:46.253035] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.494 [2024-05-13 03:11:46.253289] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.494 [2024-05-13 03:11:46.253491] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.494 [2024-05-13 03:11:46.253510] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.494 [2024-05-13 03:11:46.253522] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.494 [2024-05-13 03:11:46.256567] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.494 [2024-05-13 03:11:46.265647] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.494 [2024-05-13 03:11:46.266168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.494 [2024-05-13 03:11:46.266389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.494 [2024-05-13 03:11:46.266414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.494 [2024-05-13 03:11:46.266428] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.494 [2024-05-13 03:11:46.266647] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.494 [2024-05-13 03:11:46.266897] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.494 [2024-05-13 03:11:46.266919] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.494 [2024-05-13 03:11:46.266932] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.494 [2024-05-13 03:11:46.269970] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.494 [2024-05-13 03:11:46.279058] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.494 [2024-05-13 03:11:46.279544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.494 [2024-05-13 03:11:46.279813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.494 [2024-05-13 03:11:46.279839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.494 [2024-05-13 03:11:46.279854] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.494 [2024-05-13 03:11:46.280112] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.494 [2024-05-13 03:11:46.280314] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.494 [2024-05-13 03:11:46.280334] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.494 [2024-05-13 03:11:46.280346] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.494 [2024-05-13 03:11:46.283365] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.494 [2024-05-13 03:11:46.292660] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.494 [2024-05-13 03:11:46.293168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.494 [2024-05-13 03:11:46.293394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.494 [2024-05-13 03:11:46.293420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.494 [2024-05-13 03:11:46.293435] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.754 [2024-05-13 03:11:46.293667] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.754 [2024-05-13 03:11:46.293914] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.754 [2024-05-13 03:11:46.293936] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.754 [2024-05-13 03:11:46.293949] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.754 [2024-05-13 03:11:46.297055] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.754 [2024-05-13 03:11:46.306056] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.754 [2024-05-13 03:11:46.306546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.754 [2024-05-13 03:11:46.306807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.754 [2024-05-13 03:11:46.306834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.754 [2024-05-13 03:11:46.306849] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.754 [2024-05-13 03:11:46.307106] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.754 [2024-05-13 03:11:46.307307] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.754 [2024-05-13 03:11:46.307327] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.754 [2024-05-13 03:11:46.307339] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.754 [2024-05-13 03:11:46.310466] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.754 [2024-05-13 03:11:46.319461] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.754 [2024-05-13 03:11:46.319970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.754 [2024-05-13 03:11:46.320197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.754 [2024-05-13 03:11:46.320222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.754 [2024-05-13 03:11:46.320253] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.754 [2024-05-13 03:11:46.320492] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.754 [2024-05-13 03:11:46.320714] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.754 [2024-05-13 03:11:46.320736] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.754 [2024-05-13 03:11:46.320749] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.754 [2024-05-13 03:11:46.323880] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.754 [2024-05-13 03:11:46.332857] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.754 [2024-05-13 03:11:46.333384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.754 [2024-05-13 03:11:46.333629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.754 [2024-05-13 03:11:46.333668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.754 [2024-05-13 03:11:46.333683] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.754 [2024-05-13 03:11:46.333892] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.754 [2024-05-13 03:11:46.334113] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.754 [2024-05-13 03:11:46.334133] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.754 [2024-05-13 03:11:46.334145] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.754 [2024-05-13 03:11:46.337201] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.754 [2024-05-13 03:11:46.346164] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.755 [2024-05-13 03:11:46.346650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-05-13 03:11:46.346915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-05-13 03:11:46.346940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.755 [2024-05-13 03:11:46.346961] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.755 [2024-05-13 03:11:46.347217] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.755 [2024-05-13 03:11:46.347419] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.755 [2024-05-13 03:11:46.347438] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.755 [2024-05-13 03:11:46.347451] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.755 [2024-05-13 03:11:46.350389] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
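Likewise, each "Failed to flush tqpair=0xc33460 (9): Bad file descriptor" entry is errno EBADF (9 on Linux): once the connect attempt has failed, the flush is being issued against a socket descriptor that is no longer valid. A tiny illustrative sketch of the same errno, unrelated to SPDK internals:

#include <stdio.h>
#include <errno.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd = dup(STDOUT_FILENO);  /* obtain a valid descriptor... */
    close(fd);                    /* ...then invalidate it */

    char byte = 0;
    if (write(fd, &byte, 1) < 0) {
        /* Prints: errno = 9 (Bad file descriptor) */
        printf("write() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    return 0;
}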
00:30:55.755 [2024-05-13 03:11:46.360023] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.755 [2024-05-13 03:11:46.360651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-05-13 03:11:46.360913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-05-13 03:11:46.360942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.755 [2024-05-13 03:11:46.360959] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.755 [2024-05-13 03:11:46.361213] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.755 [2024-05-13 03:11:46.361460] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.755 [2024-05-13 03:11:46.361484] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.755 [2024-05-13 03:11:46.361500] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.755 [2024-05-13 03:11:46.365101] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.755 [2024-05-13 03:11:46.373959] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.755 [2024-05-13 03:11:46.374581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-05-13 03:11:46.374869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-05-13 03:11:46.374898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.755 [2024-05-13 03:11:46.374915] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.755 [2024-05-13 03:11:46.375176] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.755 [2024-05-13 03:11:46.375423] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.755 [2024-05-13 03:11:46.375448] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.755 [2024-05-13 03:11:46.375464] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.755 [2024-05-13 03:11:46.379086] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.755 [2024-05-13 03:11:46.388065] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.755 [2024-05-13 03:11:46.388567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-05-13 03:11:46.388821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-05-13 03:11:46.388847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.755 [2024-05-13 03:11:46.388863] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.755 [2024-05-13 03:11:46.389131] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.755 [2024-05-13 03:11:46.389377] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.755 [2024-05-13 03:11:46.389401] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.755 [2024-05-13 03:11:46.389417] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.755 [2024-05-13 03:11:46.393045] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.755 [2024-05-13 03:11:46.402083] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.755 [2024-05-13 03:11:46.402578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-05-13 03:11:46.402966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-05-13 03:11:46.403008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.755 [2024-05-13 03:11:46.403023] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.755 [2024-05-13 03:11:46.403283] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.755 [2024-05-13 03:11:46.403529] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.755 [2024-05-13 03:11:46.403552] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.755 [2024-05-13 03:11:46.403568] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.755 [2024-05-13 03:11:46.407195] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.755 [2024-05-13 03:11:46.415975] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.755 [2024-05-13 03:11:46.416451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-05-13 03:11:46.416683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-05-13 03:11:46.416721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.755 [2024-05-13 03:11:46.416738] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.755 [2024-05-13 03:11:46.416992] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.755 [2024-05-13 03:11:46.417238] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.755 [2024-05-13 03:11:46.417261] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.755 [2024-05-13 03:11:46.417277] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.755 [2024-05-13 03:11:46.420910] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.755 [2024-05-13 03:11:46.429893] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.755 [2024-05-13 03:11:46.430396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-05-13 03:11:46.430653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-05-13 03:11:46.430678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.755 [2024-05-13 03:11:46.430694] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.755 [2024-05-13 03:11:46.430995] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.755 [2024-05-13 03:11:46.431247] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.755 [2024-05-13 03:11:46.431271] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.755 [2024-05-13 03:11:46.431287] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.755 [2024-05-13 03:11:46.434918] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.755 [2024-05-13 03:11:46.443923] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.755 [2024-05-13 03:11:46.444442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-05-13 03:11:46.444716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-05-13 03:11:46.444743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.755 [2024-05-13 03:11:46.444759] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.755 [2024-05-13 03:11:46.445008] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.755 [2024-05-13 03:11:46.445254] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.755 [2024-05-13 03:11:46.445277] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.755 [2024-05-13 03:11:46.445293] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.755 [2024-05-13 03:11:46.448920] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.755 [2024-05-13 03:11:46.457901] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.755 [2024-05-13 03:11:46.458407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-05-13 03:11:46.458664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-05-13 03:11:46.458710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.755 [2024-05-13 03:11:46.458727] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.755 [2024-05-13 03:11:46.458970] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.755 [2024-05-13 03:11:46.459216] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.755 [2024-05-13 03:11:46.459240] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.755 [2024-05-13 03:11:46.459255] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.755 [2024-05-13 03:11:46.462876] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.755 [2024-05-13 03:11:46.471839] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.755 [2024-05-13 03:11:46.472397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-05-13 03:11:46.472691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-05-13 03:11:46.472722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.755 [2024-05-13 03:11:46.472754] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.755 [2024-05-13 03:11:46.473006] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.755 [2024-05-13 03:11:46.473263] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.755 [2024-05-13 03:11:46.473293] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.756 [2024-05-13 03:11:46.473309] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.756 [2024-05-13 03:11:46.476936] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.756 [2024-05-13 03:11:46.485912] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.756 [2024-05-13 03:11:46.486536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.756 [2024-05-13 03:11:46.486827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.756 [2024-05-13 03:11:46.486854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.756 [2024-05-13 03:11:46.486869] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.756 [2024-05-13 03:11:46.487132] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.756 [2024-05-13 03:11:46.487377] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.756 [2024-05-13 03:11:46.487400] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.756 [2024-05-13 03:11:46.487415] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.756 [2024-05-13 03:11:46.491036] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.756 [2024-05-13 03:11:46.499808] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.756 [2024-05-13 03:11:46.500420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.756 [2024-05-13 03:11:46.500782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.756 [2024-05-13 03:11:46.500811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.756 [2024-05-13 03:11:46.500828] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.756 [2024-05-13 03:11:46.501086] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.756 [2024-05-13 03:11:46.501334] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.756 [2024-05-13 03:11:46.501358] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.756 [2024-05-13 03:11:46.501373] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.756 [2024-05-13 03:11:46.505011] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.756 [2024-05-13 03:11:46.513797] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.756 [2024-05-13 03:11:46.514303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.756 [2024-05-13 03:11:46.514548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.756 [2024-05-13 03:11:46.514577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.756 [2024-05-13 03:11:46.514594] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.756 [2024-05-13 03:11:46.514849] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.756 [2024-05-13 03:11:46.515095] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.756 [2024-05-13 03:11:46.515119] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.756 [2024-05-13 03:11:46.515141] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.756 [2024-05-13 03:11:46.518774] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.756 [2024-05-13 03:11:46.527762] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.756 [2024-05-13 03:11:46.528245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.756 [2024-05-13 03:11:46.528558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.756 [2024-05-13 03:11:46.528587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.756 [2024-05-13 03:11:46.528604] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.756 [2024-05-13 03:11:46.528858] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.756 [2024-05-13 03:11:46.529104] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.756 [2024-05-13 03:11:46.529128] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.756 [2024-05-13 03:11:46.529144] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.756 [2024-05-13 03:11:46.532795] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.756 [2024-05-13 03:11:46.541808] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.756 [2024-05-13 03:11:46.542312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.756 [2024-05-13 03:11:46.542564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.756 [2024-05-13 03:11:46.542604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:55.756 [2024-05-13 03:11:46.542619] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:55.756 [2024-05-13 03:11:46.542868] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:55.756 [2024-05-13 03:11:46.543115] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.756 [2024-05-13 03:11:46.543139] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.756 [2024-05-13 03:11:46.543154] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.756 [2024-05-13 03:11:46.546788] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.016 [2024-05-13 03:11:46.555789] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.016 [2024-05-13 03:11:46.556300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.016 [2024-05-13 03:11:46.556541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.016 [2024-05-13 03:11:46.556569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.016 [2024-05-13 03:11:46.556586] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.016 [2024-05-13 03:11:46.556839] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.016 [2024-05-13 03:11:46.557086] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.016 [2024-05-13 03:11:46.557110] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.016 [2024-05-13 03:11:46.557125] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.016 [2024-05-13 03:11:46.560762] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.016 [2024-05-13 03:11:46.569755] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.016 [2024-05-13 03:11:46.570269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.016 [2024-05-13 03:11:46.570513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.016 [2024-05-13 03:11:46.570553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.016 [2024-05-13 03:11:46.570568] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.016 [2024-05-13 03:11:46.570844] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.016 [2024-05-13 03:11:46.571091] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.016 [2024-05-13 03:11:46.571115] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.016 [2024-05-13 03:11:46.571130] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.016 [2024-05-13 03:11:46.574763] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.016 [2024-05-13 03:11:46.583742] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.016 [2024-05-13 03:11:46.584233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.016 [2024-05-13 03:11:46.584498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.016 [2024-05-13 03:11:46.584526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.016 [2024-05-13 03:11:46.584544] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.016 [2024-05-13 03:11:46.584795] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.016 [2024-05-13 03:11:46.585043] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.016 [2024-05-13 03:11:46.585066] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.016 [2024-05-13 03:11:46.585082] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.016 [2024-05-13 03:11:46.588709] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.016 [2024-05-13 03:11:46.597677] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.016 [2024-05-13 03:11:46.598167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.016 [2024-05-13 03:11:46.598383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.016 [2024-05-13 03:11:46.598411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.016 [2024-05-13 03:11:46.598428] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.016 [2024-05-13 03:11:46.598668] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.016 [2024-05-13 03:11:46.598924] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.016 [2024-05-13 03:11:46.598948] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.016 [2024-05-13 03:11:46.598964] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.016 [2024-05-13 03:11:46.602585] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.016 [2024-05-13 03:11:46.611614] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.016 [2024-05-13 03:11:46.612105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.016 [2024-05-13 03:11:46.612344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.016 [2024-05-13 03:11:46.612374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.016 [2024-05-13 03:11:46.612392] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.016 [2024-05-13 03:11:46.612634] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.016 [2024-05-13 03:11:46.612891] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.016 [2024-05-13 03:11:46.612916] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.016 [2024-05-13 03:11:46.612932] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.016 [2024-05-13 03:11:46.616553] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.016 [2024-05-13 03:11:46.625528] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.016 [2024-05-13 03:11:46.626081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.016 [2024-05-13 03:11:46.626346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.016 [2024-05-13 03:11:46.626374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.016 [2024-05-13 03:11:46.626391] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.016 [2024-05-13 03:11:46.626632] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.016 [2024-05-13 03:11:46.626891] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.016 [2024-05-13 03:11:46.626916] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.016 [2024-05-13 03:11:46.626932] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.016 [2024-05-13 03:11:46.630582] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.016 [2024-05-13 03:11:46.639553] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.016 [2024-05-13 03:11:46.640049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.016 [2024-05-13 03:11:46.640267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.016 [2024-05-13 03:11:46.640297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.016 [2024-05-13 03:11:46.640314] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.016 [2024-05-13 03:11:46.640556] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.016 [2024-05-13 03:11:46.640814] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.016 [2024-05-13 03:11:46.640839] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.016 [2024-05-13 03:11:46.640854] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.016 [2024-05-13 03:11:46.644474] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.016 [2024-05-13 03:11:46.653655] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.016 [2024-05-13 03:11:46.654141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.016 [2024-05-13 03:11:46.654379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.016 [2024-05-13 03:11:46.654407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.016 [2024-05-13 03:11:46.654425] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.016 [2024-05-13 03:11:46.654666] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.016 [2024-05-13 03:11:46.654922] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.016 [2024-05-13 03:11:46.654947] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.016 [2024-05-13 03:11:46.654962] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.016 [2024-05-13 03:11:46.658580] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.016 [2024-05-13 03:11:46.667552] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.016 [2024-05-13 03:11:46.668051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.016 [2024-05-13 03:11:46.668303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.016 [2024-05-13 03:11:46.668331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.016 [2024-05-13 03:11:46.668348] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.017 [2024-05-13 03:11:46.668589] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.017 [2024-05-13 03:11:46.668846] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.017 [2024-05-13 03:11:46.668871] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.017 [2024-05-13 03:11:46.668887] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.017 [2024-05-13 03:11:46.672507] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.017 [2024-05-13 03:11:46.681486] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.017 [2024-05-13 03:11:46.682006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.017 [2024-05-13 03:11:46.682271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.017 [2024-05-13 03:11:46.682299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.017 [2024-05-13 03:11:46.682316] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.017 [2024-05-13 03:11:46.682557] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.017 [2024-05-13 03:11:46.682816] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.017 [2024-05-13 03:11:46.682840] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.017 [2024-05-13 03:11:46.682856] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.017 [2024-05-13 03:11:46.686473] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.017 [2024-05-13 03:11:46.695447] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.017 [2024-05-13 03:11:46.695959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.017 [2024-05-13 03:11:46.696272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.017 [2024-05-13 03:11:46.696305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.017 [2024-05-13 03:11:46.696323] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.017 [2024-05-13 03:11:46.696565] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.017 [2024-05-13 03:11:46.696823] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.017 [2024-05-13 03:11:46.696848] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.017 [2024-05-13 03:11:46.696863] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.017 [2024-05-13 03:11:46.700482] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.017 [2024-05-13 03:11:46.709459] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.017 [2024-05-13 03:11:46.709967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.017 [2024-05-13 03:11:46.710203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.017 [2024-05-13 03:11:46.710228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.017 [2024-05-13 03:11:46.710243] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.017 [2024-05-13 03:11:46.710505] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.017 [2024-05-13 03:11:46.710765] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.017 [2024-05-13 03:11:46.710789] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.017 [2024-05-13 03:11:46.710805] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.017 [2024-05-13 03:11:46.714429] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.017 [2024-05-13 03:11:46.723410] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.017 [2024-05-13 03:11:46.723917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.017 [2024-05-13 03:11:46.724191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.017 [2024-05-13 03:11:46.724219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.017 [2024-05-13 03:11:46.724236] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.017 [2024-05-13 03:11:46.724477] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.017 [2024-05-13 03:11:46.724733] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.017 [2024-05-13 03:11:46.724757] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.017 [2024-05-13 03:11:46.724773] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.017 [2024-05-13 03:11:46.728390] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.017 [2024-05-13 03:11:46.737362] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.017 [2024-05-13 03:11:46.737838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.017 [2024-05-13 03:11:46.738048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.017 [2024-05-13 03:11:46.738075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.017 [2024-05-13 03:11:46.738098] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.017 [2024-05-13 03:11:46.738339] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.017 [2024-05-13 03:11:46.738585] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.017 [2024-05-13 03:11:46.738609] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.017 [2024-05-13 03:11:46.738625] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.017 [2024-05-13 03:11:46.742256] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.017 [2024-05-13 03:11:46.751246] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.017 [2024-05-13 03:11:46.751765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.017 [2024-05-13 03:11:46.751988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.017 [2024-05-13 03:11:46.752013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.017 [2024-05-13 03:11:46.752044] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.017 [2024-05-13 03:11:46.752286] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.017 [2024-05-13 03:11:46.752532] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.017 [2024-05-13 03:11:46.752556] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.017 [2024-05-13 03:11:46.752572] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.017 [2024-05-13 03:11:46.756199] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.017 [2024-05-13 03:11:46.765174] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.017 [2024-05-13 03:11:46.765683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.017 [2024-05-13 03:11:46.765971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.017 [2024-05-13 03:11:46.765999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.017 [2024-05-13 03:11:46.766016] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.017 [2024-05-13 03:11:46.766256] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.017 [2024-05-13 03:11:46.766502] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.017 [2024-05-13 03:11:46.766525] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.017 [2024-05-13 03:11:46.766541] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.017 [2024-05-13 03:11:46.770170] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.017 [2024-05-13 03:11:46.779147] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.017 [2024-05-13 03:11:46.779668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.017 [2024-05-13 03:11:46.779915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.017 [2024-05-13 03:11:46.779958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.017 [2024-05-13 03:11:46.779976] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.017 [2024-05-13 03:11:46.780222] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.017 [2024-05-13 03:11:46.780468] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.017 [2024-05-13 03:11:46.780492] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.017 [2024-05-13 03:11:46.780507] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.017 [2024-05-13 03:11:46.784136] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.017 [2024-05-13 03:11:46.793114] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.017 [2024-05-13 03:11:46.793627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.017 [2024-05-13 03:11:46.793878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.017 [2024-05-13 03:11:46.793921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.017 [2024-05-13 03:11:46.793938] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.017 [2024-05-13 03:11:46.794180] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.017 [2024-05-13 03:11:46.794425] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.017 [2024-05-13 03:11:46.794449] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.017 [2024-05-13 03:11:46.794464] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.017 [2024-05-13 03:11:46.798096] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.017 [2024-05-13 03:11:46.807073] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.017 [2024-05-13 03:11:46.807568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.017 [2024-05-13 03:11:46.807795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.018 [2024-05-13 03:11:46.807823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.018 [2024-05-13 03:11:46.807840] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.018 [2024-05-13 03:11:46.808081] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.018 [2024-05-13 03:11:46.808326] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.018 [2024-05-13 03:11:46.808350] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.018 [2024-05-13 03:11:46.808366] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.018 [2024-05-13 03:11:46.812002] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.277 [2024-05-13 03:11:46.821040] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.277 [2024-05-13 03:11:46.821501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.277 [2024-05-13 03:11:46.821772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.277 [2024-05-13 03:11:46.821801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.277 [2024-05-13 03:11:46.821819] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.277 [2024-05-13 03:11:46.822059] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.277 [2024-05-13 03:11:46.822314] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.277 [2024-05-13 03:11:46.822338] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.278 [2024-05-13 03:11:46.822353] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.278 [2024-05-13 03:11:46.825986] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.278 [2024-05-13 03:11:46.834968] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.278 [2024-05-13 03:11:46.835473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.278 [2024-05-13 03:11:46.835693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.278 [2024-05-13 03:11:46.835726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.278 [2024-05-13 03:11:46.835742] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.278 [2024-05-13 03:11:46.835989] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.278 [2024-05-13 03:11:46.836235] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.278 [2024-05-13 03:11:46.836259] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.278 [2024-05-13 03:11:46.836274] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.278 [2024-05-13 03:11:46.839900] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.278 [2024-05-13 03:11:46.848880] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.278 [2024-05-13 03:11:46.849367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.278 [2024-05-13 03:11:46.849604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.278 [2024-05-13 03:11:46.849630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.278 [2024-05-13 03:11:46.849645] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.278 [2024-05-13 03:11:46.849918] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.278 [2024-05-13 03:11:46.850165] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.278 [2024-05-13 03:11:46.850188] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.278 [2024-05-13 03:11:46.850203] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.278 [2024-05-13 03:11:46.853831] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.278 [2024-05-13 03:11:46.862802] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.278 [2024-05-13 03:11:46.863257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.278 [2024-05-13 03:11:46.863493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.278 [2024-05-13 03:11:46.863518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.278 [2024-05-13 03:11:46.863533] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.278 [2024-05-13 03:11:46.863798] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.278 [2024-05-13 03:11:46.864045] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.278 [2024-05-13 03:11:46.864074] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.278 [2024-05-13 03:11:46.864090] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.278 [2024-05-13 03:11:46.867715] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.278 [2024-05-13 03:11:46.876679] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.278 [2024-05-13 03:11:46.877176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.278 [2024-05-13 03:11:46.877415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.278 [2024-05-13 03:11:46.877443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.278 [2024-05-13 03:11:46.877461] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.278 [2024-05-13 03:11:46.877713] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.278 [2024-05-13 03:11:46.877959] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.278 [2024-05-13 03:11:46.877983] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.278 [2024-05-13 03:11:46.877999] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.278 [2024-05-13 03:11:46.881618] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.278 [2024-05-13 03:11:46.890596] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.278 [2024-05-13 03:11:46.891080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.278 [2024-05-13 03:11:46.891349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.278 [2024-05-13 03:11:46.891374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.278 [2024-05-13 03:11:46.891389] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.278 [2024-05-13 03:11:46.891659] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.278 [2024-05-13 03:11:46.891916] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.278 [2024-05-13 03:11:46.891941] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.278 [2024-05-13 03:11:46.891957] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.278 [2024-05-13 03:11:46.895575] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.278 [2024-05-13 03:11:46.904548] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.278 [2024-05-13 03:11:46.905059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.278 [2024-05-13 03:11:46.905323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.278 [2024-05-13 03:11:46.905349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.278 [2024-05-13 03:11:46.905364] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.278 [2024-05-13 03:11:46.905630] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.278 [2024-05-13 03:11:46.905888] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.278 [2024-05-13 03:11:46.905913] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.278 [2024-05-13 03:11:46.905933] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.278 [2024-05-13 03:11:46.909551] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.278 [2024-05-13 03:11:46.918522] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.278 [2024-05-13 03:11:46.919043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.278 [2024-05-13 03:11:46.919310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.278 [2024-05-13 03:11:46.919338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.278 [2024-05-13 03:11:46.919355] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.278 [2024-05-13 03:11:46.919596] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.278 [2024-05-13 03:11:46.919854] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.278 [2024-05-13 03:11:46.919879] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.278 [2024-05-13 03:11:46.919894] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.278 [2024-05-13 03:11:46.923513] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.278 [2024-05-13 03:11:46.932486] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.278 [2024-05-13 03:11:46.932995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.278 [2024-05-13 03:11:46.933215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.278 [2024-05-13 03:11:46.933244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.278 [2024-05-13 03:11:46.933261] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.278 [2024-05-13 03:11:46.933502] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.278 [2024-05-13 03:11:46.933760] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.278 [2024-05-13 03:11:46.933785] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.278 [2024-05-13 03:11:46.933800] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.278 [2024-05-13 03:11:46.937417] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.278 [2024-05-13 03:11:46.946382] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.278 [2024-05-13 03:11:46.946879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.278 [2024-05-13 03:11:46.947265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.278 [2024-05-13 03:11:46.947293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.278 [2024-05-13 03:11:46.947310] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.278 [2024-05-13 03:11:46.947551] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.278 [2024-05-13 03:11:46.947811] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.278 [2024-05-13 03:11:46.947836] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.278 [2024-05-13 03:11:46.947851] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.278 [2024-05-13 03:11:46.951477] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.278 [2024-05-13 03:11:46.960453] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.278 [2024-05-13 03:11:46.960945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.278 [2024-05-13 03:11:46.961189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.278 [2024-05-13 03:11:46.961229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.279 [2024-05-13 03:11:46.961244] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.279 [2024-05-13 03:11:46.961501] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.279 [2024-05-13 03:11:46.961760] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.279 [2024-05-13 03:11:46.961785] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.279 [2024-05-13 03:11:46.961800] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.279 [2024-05-13 03:11:46.965417] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.279 [2024-05-13 03:11:46.974396] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.279 [2024-05-13 03:11:46.974897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.279 [2024-05-13 03:11:46.975325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.279 [2024-05-13 03:11:46.975376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.279 [2024-05-13 03:11:46.975393] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.279 [2024-05-13 03:11:46.975634] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.279 [2024-05-13 03:11:46.975890] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.279 [2024-05-13 03:11:46.975915] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.279 [2024-05-13 03:11:46.975930] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.279 [2024-05-13 03:11:46.979547] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.279 [2024-05-13 03:11:46.988308] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.279 [2024-05-13 03:11:46.988825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.279 [2024-05-13 03:11:46.989059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.279 [2024-05-13 03:11:46.989085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.279 [2024-05-13 03:11:46.989101] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.279 [2024-05-13 03:11:46.989349] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.279 [2024-05-13 03:11:46.989595] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.279 [2024-05-13 03:11:46.989619] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.279 [2024-05-13 03:11:46.989634] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.279 [2024-05-13 03:11:46.993263] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.279 [2024-05-13 03:11:47.002242] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.279 [2024-05-13 03:11:47.002755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.279 [2024-05-13 03:11:47.003004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.279 [2024-05-13 03:11:47.003032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.279 [2024-05-13 03:11:47.003050] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.279 [2024-05-13 03:11:47.003291] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.279 [2024-05-13 03:11:47.003536] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.279 [2024-05-13 03:11:47.003560] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.279 [2024-05-13 03:11:47.003574] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.279 [2024-05-13 03:11:47.007203] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.279 [2024-05-13 03:11:47.016220] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.279 [2024-05-13 03:11:47.016738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.279 [2024-05-13 03:11:47.016973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.279 [2024-05-13 03:11:47.016999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.279 [2024-05-13 03:11:47.017015] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.279 [2024-05-13 03:11:47.017264] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.279 [2024-05-13 03:11:47.017510] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.279 [2024-05-13 03:11:47.017533] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.279 [2024-05-13 03:11:47.017549] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.279 [2024-05-13 03:11:47.021218] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.279 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 481690 Killed "${NVMF_APP[@]}" "$@" 00:30:56.279 03:11:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:30:56.279 03:11:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:56.279 03:11:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:56.279 03:11:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:56.279 03:11:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:56.279 [2024-05-13 03:11:47.030747] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.279 [2024-05-13 03:11:47.031236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.279 [2024-05-13 03:11:47.031462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.279 [2024-05-13 03:11:47.031488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.279 [2024-05-13 03:11:47.031504] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.279 [2024-05-13 03:11:47.031767] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.279 [2024-05-13 03:11:47.031976] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.279 [2024-05-13 03:11:47.031997] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.279 [2024-05-13 03:11:47.032029] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.279 03:11:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=482658 00:30:56.279 03:11:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:56.279 03:11:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 482658 00:30:56.279 03:11:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 482658 ']' 00:30:56.279 03:11:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:56.279 03:11:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:56.279 03:11:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:56.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:56.279 03:11:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:56.279 03:11:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:56.279 [2024-05-13 03:11:47.036420] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
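Annotation: at this point the test has deliberately killed the old nvmf_tgt (which explains the reconnect failures above), and tgt_init relaunches it inside the cvl_0_0_ns_spdk network namespace, then blocks in waitforlisten until the new process (pid 482658) exposes its RPC socket at /var/tmp/spdk.sock. A rough sketch of what a waitforlisten-style helper presumably does, shown only as an approximation; the pid, socket path, and retry count come from the trace above, while the loop body is an assumption rather than the autotest implementation:

# Poll until the target process listens on its UNIX-domain RPC socket, or give up.
wait_for_rpc_socket() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        # Give up early if the target process died instead of starting up.
        kill -0 "$pid" 2>/dev/null || return 1
        # Done as soon as the UNIX-domain RPC socket exists.
        if [[ -S $rpc_addr ]]; then
            return 0
        fi
        sleep 0.5
    done
    return 1
}
# e.g. wait_for_rpc_socket 482658 /var/tmp/spdk.sock 100

The real helper likely also verifies that the RPC server answers a request over that socket; that check is omitted from this sketch.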
00:30:56.279 [2024-05-13 03:11:47.044675] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.279 [2024-05-13 03:11:47.045235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.279 [2024-05-13 03:11:47.045507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.279 [2024-05-13 03:11:47.045536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.279 [2024-05-13 03:11:47.045554] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.279 [2024-05-13 03:11:47.045832] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.279 [2024-05-13 03:11:47.046069] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.279 [2024-05-13 03:11:47.046095] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.279 [2024-05-13 03:11:47.046111] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.279 [2024-05-13 03:11:47.049748] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.279 [2024-05-13 03:11:47.058753] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.279 [2024-05-13 03:11:47.059227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.279 [2024-05-13 03:11:47.059493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.279 [2024-05-13 03:11:47.059522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.279 [2024-05-13 03:11:47.059540] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.279 [2024-05-13 03:11:47.059822] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.279 [2024-05-13 03:11:47.060060] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.279 [2024-05-13 03:11:47.060086] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.279 [2024-05-13 03:11:47.060101] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.279 [2024-05-13 03:11:47.063810] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.279 [2024-05-13 03:11:47.072806] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.279 [2024-05-13 03:11:47.073316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.279 [2024-05-13 03:11:47.073589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.279 [2024-05-13 03:11:47.073619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.279 [2024-05-13 03:11:47.073637] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.279 [2024-05-13 03:11:47.073897] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.279 [2024-05-13 03:11:47.074153] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.279 [2024-05-13 03:11:47.074178] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.280 [2024-05-13 03:11:47.074194] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.280 [2024-05-13 03:11:47.077871] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.540 [2024-05-13 03:11:47.082820] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:30:56.540 [2024-05-13 03:11:47.082895] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:56.540 [2024-05-13 03:11:47.086938] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.540 [2024-05-13 03:11:47.087469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.540 [2024-05-13 03:11:47.087776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.540 [2024-05-13 03:11:47.087803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.540 [2024-05-13 03:11:47.087819] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.540 [2024-05-13 03:11:47.088070] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.540 [2024-05-13 03:11:47.088316] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.540 [2024-05-13 03:11:47.088340] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.540 [2024-05-13 03:11:47.088355] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.540 [2024-05-13 03:11:47.092024] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.540 [2024-05-13 03:11:47.100922] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.540 [2024-05-13 03:11:47.101422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.540 [2024-05-13 03:11:47.101716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.540 [2024-05-13 03:11:47.101743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.540 [2024-05-13 03:11:47.101759] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.540 [2024-05-13 03:11:47.102013] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.540 [2024-05-13 03:11:47.102259] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.540 [2024-05-13 03:11:47.102283] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.540 [2024-05-13 03:11:47.102299] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.540 [2024-05-13 03:11:47.105882] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.540 [2024-05-13 03:11:47.114940] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.540 [2024-05-13 03:11:47.115457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.540 [2024-05-13 03:11:47.115758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.540 [2024-05-13 03:11:47.115785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.540 [2024-05-13 03:11:47.115800] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.540 [2024-05-13 03:11:47.116046] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.540 [2024-05-13 03:11:47.116293] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.540 [2024-05-13 03:11:47.116316] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.540 [2024-05-13 03:11:47.116332] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.540 [2024-05-13 03:11:47.119914] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.540 EAL: No free 2048 kB hugepages reported on node 1 00:30:56.540 [2024-05-13 03:11:47.128947] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.540 [2024-05-13 03:11:47.129442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.540 [2024-05-13 03:11:47.129711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.540 [2024-05-13 03:11:47.129741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.540 [2024-05-13 03:11:47.129772] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.540 [2024-05-13 03:11:47.129900] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:56.540 [2024-05-13 03:11:47.130005] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.540 [2024-05-13 03:11:47.130261] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.540 [2024-05-13 03:11:47.130285] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.540 [2024-05-13 03:11:47.130307] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.540 [2024-05-13 03:11:47.133886] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.540 [2024-05-13 03:11:47.142925] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.540 [2024-05-13 03:11:47.143423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.540 [2024-05-13 03:11:47.143667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.540 [2024-05-13 03:11:47.143716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.540 [2024-05-13 03:11:47.143750] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.540 [2024-05-13 03:11:47.143982] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.540 [2024-05-13 03:11:47.144252] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.540 [2024-05-13 03:11:47.144276] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.540 [2024-05-13 03:11:47.144292] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.540 [2024-05-13 03:11:47.147851] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
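Annotation: the "No free 2048 kB hugepages reported on node 1" notice from EAL means NUMA node 1 had no free 2 MiB hugepages when the restarted target initialized; initialization continues in this log, so pages on another node evidently sufficed (SPDK's scripts/setup.sh normally handles the reservation). A generic way to inspect and top up the 2 MiB pool on a Linux host, using standard sysfs paths; the page count below is illustrative, not what this CI node uses:

# System-wide hugepage accounting.
grep -i huge /proc/meminfo
# Per-NUMA-node view of the 2 MiB pool.
cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
# Reserve 1024 pages (2 GiB) on node 1 specifically (requires root; count is an example).
echo 1024 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages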
00:30:56.540 [2024-05-13 03:11:47.156894] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.540 [2024-05-13 03:11:47.157412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.540 [2024-05-13 03:11:47.157712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.540 [2024-05-13 03:11:47.157755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.540 [2024-05-13 03:11:47.157771] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.540 [2024-05-13 03:11:47.158004] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.540 [2024-05-13 03:11:47.158251] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.540 [2024-05-13 03:11:47.158275] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.540 [2024-05-13 03:11:47.158290] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.540 [2024-05-13 03:11:47.160839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:56.540 [2024-05-13 03:11:47.161897] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.540 [2024-05-13 03:11:47.170800] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.540 [2024-05-13 03:11:47.171453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.540 [2024-05-13 03:11:47.171796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.540 [2024-05-13 03:11:47.171824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.540 [2024-05-13 03:11:47.171845] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.540 [2024-05-13 03:11:47.172104] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.540 [2024-05-13 03:11:47.172356] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.540 [2024-05-13 03:11:47.172381] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.540 [2024-05-13 03:11:47.172400] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.540 [2024-05-13 03:11:47.176035] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
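Annotation: "Total cores available: 3" is consistent with the -m 0xE core mask passed to nvmf_tgt above: 0xE is binary 1110, i.e. cores 1, 2 and 3 with core 0 left out. The mask can be decoded with any base converter, for example:

# 0xE -> 1110: bit positions 1-3 are set, so three cores are used.
echo 'obase=2; ibase=16; E' | bc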
00:30:56.540 [2024-05-13 03:11:47.184833] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.540 [2024-05-13 03:11:47.185371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.540 [2024-05-13 03:11:47.185638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.540 [2024-05-13 03:11:47.185666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.540 [2024-05-13 03:11:47.185693] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.540 [2024-05-13 03:11:47.185945] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.540 [2024-05-13 03:11:47.186215] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.540 [2024-05-13 03:11:47.186240] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.541 [2024-05-13 03:11:47.186256] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.541 [2024-05-13 03:11:47.189845] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.541 [2024-05-13 03:11:47.198645] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.541 [2024-05-13 03:11:47.199169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.541 [2024-05-13 03:11:47.199414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.541 [2024-05-13 03:11:47.199442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.541 [2024-05-13 03:11:47.199461] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.541 [2024-05-13 03:11:47.199710] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.541 [2024-05-13 03:11:47.199943] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.541 [2024-05-13 03:11:47.199964] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.541 [2024-05-13 03:11:47.199979] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.541 [2024-05-13 03:11:47.203569] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.541 [2024-05-13 03:11:47.212605] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.541 [2024-05-13 03:11:47.213383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.541 [2024-05-13 03:11:47.213692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.541 [2024-05-13 03:11:47.213729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.541 [2024-05-13 03:11:47.213750] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.541 [2024-05-13 03:11:47.214016] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.541 [2024-05-13 03:11:47.214270] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.541 [2024-05-13 03:11:47.214295] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.541 [2024-05-13 03:11:47.214314] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.541 [2024-05-13 03:11:47.217893] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.541 [2024-05-13 03:11:47.226514] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.541 [2024-05-13 03:11:47.227095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.541 [2024-05-13 03:11:47.227345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.541 [2024-05-13 03:11:47.227388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.541 [2024-05-13 03:11:47.227409] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.541 [2024-05-13 03:11:47.227658] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.541 [2024-05-13 03:11:47.227915] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.541 [2024-05-13 03:11:47.227938] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.541 [2024-05-13 03:11:47.227954] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.541 [2024-05-13 03:11:47.231640] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.541 [2024-05-13 03:11:47.240593] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.541 [2024-05-13 03:11:47.241172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.541 [2024-05-13 03:11:47.241409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.541 [2024-05-13 03:11:47.241436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.541 [2024-05-13 03:11:47.241453] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.541 [2024-05-13 03:11:47.241719] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.541 [2024-05-13 03:11:47.241942] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.541 [2024-05-13 03:11:47.241964] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.541 [2024-05-13 03:11:47.241989] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.541 [2024-05-13 03:11:47.245656] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.541 [2024-05-13 03:11:47.254367] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:56.541 [2024-05-13 03:11:47.254402] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:56.541 [2024-05-13 03:11:47.254416] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:56.541 [2024-05-13 03:11:47.254429] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:56.541 [2024-05-13 03:11:47.254441] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
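The app_setup_trace notices above show that the bdevperf application was started with tracepoint group mask 0xFFFF, so the trace buffer it names can be captured for later inspection. A minimal sketch of doing that, using only the command and path printed in the notices (the instance id 0 and /dev/shm/nvmf_trace.0 are taken from this log, and the output file names here are arbitrary):

# Snapshot the live trace of instance 0 while the application is still running.
spdk_trace -s nvmf -i 0 > nvmf_trace_snapshot.txt
# Or preserve the raw shared-memory buffer for offline analysis, as the notice suggests.
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0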
00:30:56.541 [2024-05-13 03:11:47.254543] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.541 [2024-05-13 03:11:47.254627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:56.541 [2024-05-13 03:11:47.254732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:56.541 [2024-05-13 03:11:47.254736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:56.541 [2024-05-13 03:11:47.255035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.541 [2024-05-13 03:11:47.255261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.541 [2024-05-13 03:11:47.255286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.541 [2024-05-13 03:11:47.255303] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.541 [2024-05-13 03:11:47.255520] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.541 [2024-05-13 03:11:47.255782] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.541 [2024-05-13 03:11:47.255804] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.541 [2024-05-13 03:11:47.255819] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.541 [2024-05-13 03:11:47.259142] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.541 [2024-05-13 03:11:47.268276] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.541 [2024-05-13 03:11:47.268896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.541 [2024-05-13 03:11:47.269134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.541 [2024-05-13 03:11:47.269161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.541 [2024-05-13 03:11:47.269181] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.541 [2024-05-13 03:11:47.269434] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.541 [2024-05-13 03:11:47.269654] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.541 [2024-05-13 03:11:47.269692] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.541 [2024-05-13 03:11:47.269717] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.541 [2024-05-13 03:11:47.272983] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.541 [2024-05-13 03:11:47.281978] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.541 [2024-05-13 03:11:47.282638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.541 [2024-05-13 03:11:47.282926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.541 [2024-05-13 03:11:47.282953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.541 [2024-05-13 03:11:47.282975] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.541 [2024-05-13 03:11:47.283220] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.541 [2024-05-13 03:11:47.283442] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.541 [2024-05-13 03:11:47.283464] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.541 [2024-05-13 03:11:47.283482] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.541 [2024-05-13 03:11:47.286620] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.541 [2024-05-13 03:11:47.295560] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.541 [2024-05-13 03:11:47.296201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.541 [2024-05-13 03:11:47.296476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.541 [2024-05-13 03:11:47.296502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.541 [2024-05-13 03:11:47.296523] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.541 [2024-05-13 03:11:47.296780] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.541 [2024-05-13 03:11:47.297001] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.541 [2024-05-13 03:11:47.297023] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.541 [2024-05-13 03:11:47.297042] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.541 [2024-05-13 03:11:47.300289] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.541 [2024-05-13 03:11:47.309262] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.541 [2024-05-13 03:11:47.309875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.541 [2024-05-13 03:11:47.310077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.541 [2024-05-13 03:11:47.310103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.541 [2024-05-13 03:11:47.310124] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.541 [2024-05-13 03:11:47.310364] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.541 [2024-05-13 03:11:47.310596] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.541 [2024-05-13 03:11:47.310618] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.541 [2024-05-13 03:11:47.310634] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.542 [2024-05-13 03:11:47.313996] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.542 [2024-05-13 03:11:47.322984] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.542 [2024-05-13 03:11:47.323561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.542 [2024-05-13 03:11:47.323769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.542 [2024-05-13 03:11:47.323797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.542 [2024-05-13 03:11:47.323819] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.542 [2024-05-13 03:11:47.324064] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.542 [2024-05-13 03:11:47.324286] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.542 [2024-05-13 03:11:47.324307] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.542 [2024-05-13 03:11:47.324326] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.542 [2024-05-13 03:11:47.327656] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.542 [2024-05-13 03:11:47.336543] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.542 [2024-05-13 03:11:47.337112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.542 [2024-05-13 03:11:47.337358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.542 [2024-05-13 03:11:47.337385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.542 [2024-05-13 03:11:47.337406] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.542 [2024-05-13 03:11:47.337638] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.542 [2024-05-13 03:11:47.337874] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.542 [2024-05-13 03:11:47.337898] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.542 [2024-05-13 03:11:47.337916] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.801 [2024-05-13 03:11:47.341301] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.801 [2024-05-13 03:11:47.350145] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.801 [2024-05-13 03:11:47.350598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.801 [2024-05-13 03:11:47.350830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.801 [2024-05-13 03:11:47.350857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.801 [2024-05-13 03:11:47.350873] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.801 [2024-05-13 03:11:47.351092] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.801 [2024-05-13 03:11:47.351322] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.801 [2024-05-13 03:11:47.351350] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.801 [2024-05-13 03:11:47.351365] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.801 [2024-05-13 03:11:47.354538] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.801 [2024-05-13 03:11:47.363905] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.801 [2024-05-13 03:11:47.364342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.801 [2024-05-13 03:11:47.364571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.801 [2024-05-13 03:11:47.364599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.801 [2024-05-13 03:11:47.364615] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.801 [2024-05-13 03:11:47.364842] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.801 [2024-05-13 03:11:47.365083] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.801 [2024-05-13 03:11:47.365105] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.801 [2024-05-13 03:11:47.365119] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.801 [2024-05-13 03:11:47.368422] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.801 03:11:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:56.801 03:11:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:30:56.801 03:11:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:56.801 03:11:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:56.801 03:11:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:56.801 [2024-05-13 03:11:47.377545] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.801 [2024-05-13 03:11:47.378072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.801 [2024-05-13 03:11:47.378305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.801 [2024-05-13 03:11:47.378332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.801 [2024-05-13 03:11:47.378348] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.801 [2024-05-13 03:11:47.378588] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.801 [2024-05-13 03:11:47.378836] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.801 [2024-05-13 03:11:47.378859] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.801 [2024-05-13 03:11:47.378873] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.801 [2024-05-13 03:11:47.382153] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
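For context on the records above: errno 111 is ECONNREFUSED, so each "connect() failed, errno = 111" line means bdevperf retried 10.0.0.2:4420 before the target's TCP listener existed; the resets start succeeding once the listener comes up later in this log. A quick way to confirm that interpretation while such a loop is running (a sketch only; the network namespace name is the one this test uses, and the ss utility is assumed to be available):

# Translate the errno seen in the log into its symbolic name.
python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
# Check from inside the target namespace whether anything is listening on port 4420 yet.
ip netns exec cvl_0_0_ns_spdk ss -ltn '( sport = :4420 )'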
00:30:56.801 03:11:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:56.801 03:11:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:56.801 03:11:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:56.801 03:11:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:56.801 [2024-05-13 03:11:47.391170] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.801 [2024-05-13 03:11:47.391624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.801 [2024-05-13 03:11:47.391866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.801 [2024-05-13 03:11:47.391899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.801 [2024-05-13 03:11:47.391916] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.801 [2024-05-13 03:11:47.392149] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.801 [2024-05-13 03:11:47.392365] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.801 [2024-05-13 03:11:47.392386] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.801 [2024-05-13 03:11:47.392399] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.801 [2024-05-13 03:11:47.394827] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:56.801 [2024-05-13 03:11:47.395642] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.801 03:11:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:56.801 03:11:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:56.801 03:11:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:56.801 03:11:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:56.801 [2024-05-13 03:11:47.404822] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.801 [2024-05-13 03:11:47.405289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.801 [2024-05-13 03:11:47.405491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.801 [2024-05-13 03:11:47.405519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.801 [2024-05-13 03:11:47.405535] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.801 [2024-05-13 03:11:47.405793] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.802 [2024-05-13 03:11:47.406015] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.802 [2024-05-13 03:11:47.406056] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.802 [2024-05-13 03:11:47.406070] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.802 [2024-05-13 03:11:47.409325] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.802 [2024-05-13 03:11:47.418423] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.802 [2024-05-13 03:11:47.418883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.802 [2024-05-13 03:11:47.419275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.802 [2024-05-13 03:11:47.419314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.802 [2024-05-13 03:11:47.419329] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.802 [2024-05-13 03:11:47.419554] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.802 [2024-05-13 03:11:47.419798] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.802 [2024-05-13 03:11:47.419821] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.802 [2024-05-13 03:11:47.419834] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.802 [2024-05-13 03:11:47.423065] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
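Interleaved with the reconnect noise, the shell trace in the surrounding records shows bdevperf.sh bringing the NVMe-oF target up over its RPC socket: create the TCP transport, create the Malloc0 bdev, then (in the records that follow) create subsystem nqn.2016-06.io.spdk:cnode1, attach Malloc0 as its namespace, and add the 10.0.0.2:4420 TCP listener. Outside the autotest harness, where rpc_cmd is a thin wrapper, roughly the same sequence could be issued with scripts/rpc.py; a sketch, assuming the default RPC socket and an SPDK checkout at $SPDK_DIR:

# Mirror of the rpc_cmd calls recorded in this log, issued directly against the running target.
$SPDK_DIR/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
$SPDK_DIR/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
$SPDK_DIR/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420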
00:30:56.802 [2024-05-13 03:11:47.432092] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.802 [2024-05-13 03:11:47.432880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.802 [2024-05-13 03:11:47.433134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.802 [2024-05-13 03:11:47.433160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.802 [2024-05-13 03:11:47.433180] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.802 [2024-05-13 03:11:47.433423] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.802 [2024-05-13 03:11:47.433644] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.802 [2024-05-13 03:11:47.433665] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.802 [2024-05-13 03:11:47.433682] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.802 [2024-05-13 03:11:47.437042] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.802 Malloc0 00:30:56.802 03:11:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:56.802 03:11:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:56.802 03:11:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:56.802 03:11:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:56.802 [2024-05-13 03:11:47.445776] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.802 03:11:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:56.802 03:11:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:56.802 03:11:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:56.802 03:11:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:56.802 [2024-05-13 03:11:47.446342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.802 [2024-05-13 03:11:47.446582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.802 [2024-05-13 03:11:47.446613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc33460 with addr=10.0.0.2, port=4420 00:30:56.802 [2024-05-13 03:11:47.446631] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33460 is same with the state(5) to be set 00:30:56.802 [2024-05-13 03:11:47.446861] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33460 (9): Bad file descriptor 00:30:56.802 [2024-05-13 03:11:47.447084] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.802 [2024-05-13 03:11:47.447106] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.802 [2024-05-13 03:11:47.447120] 
nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.802 [2024-05-13 03:11:47.450391] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.802 03:11:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:56.802 03:11:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:56.802 03:11:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:56.802 03:11:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:56.802 [2024-05-13 03:11:47.457455] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:30:56.802 [2024-05-13 03:11:47.457729] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:56.802 [2024-05-13 03:11:47.459367] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.802 03:11:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:56.802 03:11:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 481975 00:30:56.802 [2024-05-13 03:11:47.495966] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:06.772 00:31:06.772 Latency(us) 00:31:06.772 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:06.772 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:06.772 Verification LBA range: start 0x0 length 0x4000 00:31:06.772 Nvme1n1 : 15.01 6721.69 26.26 8579.47 0.00 8340.79 1140.81 21262.79 00:31:06.772 =================================================================================================================== 00:31:06.772 Total : 6721.69 26.26 8579.47 0.00 8340.79 1140.81 21262.79 00:31:06.772 03:11:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:31:06.772 03:11:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:06.772 03:11:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.772 03:11:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:06.772 03:11:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.772 03:11:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:31:06.772 03:11:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:31:06.772 03:11:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:06.772 03:11:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:31:06.772 03:11:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:06.772 03:11:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:31:06.772 03:11:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:06.772 03:11:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:06.772 rmmod nvme_tcp 00:31:06.772 rmmod nvme_fabrics 00:31:06.772 rmmod nvme_keyring 00:31:06.772 03:11:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:06.772 03:11:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:31:06.772 03:11:56 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@125 -- # return 0 00:31:06.772 03:11:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 482658 ']' 00:31:06.772 03:11:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 482658 00:31:06.772 03:11:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 482658 ']' 00:31:06.772 03:11:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@950 -- # kill -0 482658 00:31:06.772 03:11:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # uname 00:31:06.772 03:11:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:06.772 03:11:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 482658 00:31:06.772 03:11:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:31:06.772 03:11:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:31:06.772 03:11:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 482658' 00:31:06.772 killing process with pid 482658 00:31:06.772 03:11:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@965 -- # kill 482658 00:31:06.772 [2024-05-13 03:11:56.745930] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:06.772 03:11:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@970 -- # wait 482658 00:31:06.772 03:11:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:06.772 03:11:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:06.772 03:11:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:06.772 03:11:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:06.772 03:11:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:06.772 03:11:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:06.772 03:11:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:06.772 03:11:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:08.687 03:11:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:08.687 00:31:08.687 real 0m22.271s 00:31:08.687 user 0m52.849s 00:31:08.687 sys 0m5.937s 00:31:08.687 03:11:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:08.687 03:11:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:08.687 ************************************ 00:31:08.687 END TEST nvmf_bdevperf 00:31:08.687 ************************************ 00:31:08.687 03:11:59 nvmf_tcp -- nvmf/nvmf.sh@121 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:31:08.687 03:11:59 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:08.687 03:11:59 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:08.687 03:11:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:08.687 ************************************ 00:31:08.687 START TEST nvmf_target_disconnect 00:31:08.687 ************************************ 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:31:08.687 * Looking for test storage... 00:31:08.687 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestinit 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:31:08.687 03:11:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:10.590 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:10.590 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:31:10.590 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:10.590 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:10.590 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:10.590 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:10.590 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:10.590 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:31:10.590 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:10.590 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:31:10.590 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:31:10.590 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:31:10.590 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:31:10.590 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:31:10.590 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:31:10.590 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:10.590 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:10.590 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:10.590 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:10.590 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:10.590 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:10.590 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:10.590 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:10.590 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:31:10.590 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:10.590 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:10.590 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:10.590 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:10.590 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:10.590 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:10.590 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:10.590 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:10.590 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:10.590 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:10.590 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:10.590 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:10.590 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:10.590 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:10.590 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:10.590 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:10.590 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:10.590 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:10.590 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:10.590 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:10.590 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:10.590 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:10.590 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:10.590 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:10.590 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:10.590 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:10.590 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:10.590 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:10.591 03:12:01 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:10.591 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:10.591 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:10.591 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:10.591 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:31:10.591 00:31:10.591 --- 10.0.0.2 ping statistics --- 00:31:10.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:10.591 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:10.591 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:10.591 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:31:10.591 00:31:10.591 --- 10.0.0.1 ping statistics --- 00:31:10.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:10.591 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:10.591 ************************************ 00:31:10.591 START TEST nvmf_target_disconnect_tc1 00:31:10.591 ************************************ 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc1 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # set +e 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:10.591 EAL: No 
free 2048 kB hugepages reported on node 1 00:31:10.591 [2024-05-13 03:12:01.326676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.591 [2024-05-13 03:12:01.326949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.591 [2024-05-13 03:12:01.326977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x540d20 with addr=10.0.0.2, port=4420 00:31:10.591 [2024-05-13 03:12:01.327024] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:10.591 [2024-05-13 03:12:01.327049] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:10.591 [2024-05-13 03:12:01.327063] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:31:10.591 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:31:10.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:31:10.591 Initializing NVMe Controllers 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@33 -- # trap - ERR 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@33 -- # print_backtrace 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1149 -- # [[ hxBET =~ e ]] 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1149 -- # return 0 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@41 -- # set -e 00:31:10.591 00:31:10.591 real 0m0.094s 00:31:10.591 user 0m0.036s 00:31:10.591 sys 0m0.057s 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:10.591 ************************************ 00:31:10.591 END TEST nvmf_target_disconnect_tc1 00:31:10.591 ************************************ 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:10.591 ************************************ 00:31:10.591 START TEST nvmf_target_disconnect_tc2 00:31:10.591 ************************************ 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc2 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:31:10.591 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:10.849 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=485882 00:31:10.849 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:31:10.849 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 485882 00:31:10.850 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 485882 ']' 00:31:10.850 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:10.850 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:10.850 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:10.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:10.850 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:10.850 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:10.850 [2024-05-13 03:12:01.440290] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:31:10.850 [2024-05-13 03:12:01.440369] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:10.850 EAL: No free 2048 kB hugepages reported on node 1 00:31:10.850 [2024-05-13 03:12:01.478404] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:10.850 [2024-05-13 03:12:01.507657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:10.850 [2024-05-13 03:12:01.599733] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:10.850 [2024-05-13 03:12:01.599805] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:10.850 [2024-05-13 03:12:01.599819] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:10.850 [2024-05-13 03:12:01.599830] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:10.850 [2024-05-13 03:12:01.599840] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
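For readers skimming the xtrace above: a condensed sketch of the interface setup that nvmf_tcp_init performed in this run. The device names cvl_0_0/cvl_0_1 (the two ports of the NIC at 0000:0a:00.x) and the 10.0.0.0/24 addresses are the values observed in this log; this is an illustration of the steps, not the harness script itself.

# Sketch only: condensed from the nvmf/common.sh trace above for this run.
# cvl_0_0 becomes the target port inside a network namespace; cvl_0_1 stays
# in the default namespace as the initiator port.
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add "$NVMF_TARGET_NAMESPACE"
ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"

ip addr add 10.0.0.1/24 dev cvl_0_1                                          # initiator side
ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side

ip link set cvl_0_1 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up

# Allow NVMe/TCP traffic to the default port 4420 and verify reachability both ways.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1

modprobe nvme-tcp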
00:31:10.850 [2024-05-13 03:12:01.599973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:31:10.850 [2024-05-13 03:12:01.600027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:31:10.850 [2024-05-13 03:12:01.600077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:31:10.850 [2024-05-13 03:12:01.600079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:31:11.107 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:11.107 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:31:11.107 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:11.107 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:11.107 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:11.107 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:11.107 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:11.107 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.107 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:11.107 Malloc0 00:31:11.107 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.107 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:11.107 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.107 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:11.107 [2024-05-13 03:12:01.770621] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:11.107 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.107 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:11.108 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.108 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:11.108 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.108 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:11.108 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.108 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:11.108 03:12:01 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.108 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:11.108 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.108 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:11.108 [2024-05-13 03:12:01.798601] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:31:11.108 [2024-05-13 03:12:01.798918] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:11.108 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.108 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:11.108 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.108 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:11.108 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.108 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # reconnectpid=485925 00:31:11.108 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@52 -- # sleep 2 00:31:11.108 03:12:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:11.108 EAL: No free 2048 kB hugepages reported on node 1 00:31:13.650 03:12:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@53 -- # kill -9 485882 00:31:13.650 03:12:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@55 -- # sleep 2 00:31:13.650 Read completed with error (sct=0, sc=8) 00:31:13.650 starting I/O failed 00:31:13.650 Read completed with error (sct=0, sc=8) 00:31:13.650 starting I/O failed 00:31:13.650 Read completed with error (sct=0, sc=8) 00:31:13.650 starting I/O failed 00:31:13.650 Read completed with error (sct=0, sc=8) 00:31:13.650 starting I/O failed 00:31:13.650 Read completed with error (sct=0, sc=8) 00:31:13.650 starting I/O failed 00:31:13.650 Read completed with error (sct=0, sc=8) 00:31:13.650 starting I/O failed 00:31:13.650 Read completed with error (sct=0, sc=8) 00:31:13.650 starting I/O failed 00:31:13.650 Read completed with error (sct=0, sc=8) 00:31:13.650 starting I/O failed 00:31:13.650 Read completed with error (sct=0, sc=8) 00:31:13.650 starting I/O failed 00:31:13.650 Read completed with error (sct=0, sc=8) 00:31:13.650 starting I/O failed 00:31:13.650 Read completed with error (sct=0, sc=8) 00:31:13.650 starting I/O failed 00:31:13.650 Write completed with error (sct=0, sc=8) 00:31:13.650 starting 
I/O failed 00:31:13.650 Read completed with error (sct=0, sc=8) 00:31:13.650 starting I/O failed 00:31:13.650 Write completed with error (sct=0, sc=8) 00:31:13.650 starting I/O failed 00:31:13.650 Read completed with error (sct=0, sc=8) 00:31:13.650 starting I/O failed 00:31:13.650 Write completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Write completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Write completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Write completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Write completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Write completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Write completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 [2024-05-13 03:12:03.825355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Write completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Write completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Write completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 
00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Write completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Write completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Write completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Write completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 [2024-05-13 03:12:03.825671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Write completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Write completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Write completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Write completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Write completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Write completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read 
completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Write completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Write completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Write completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Write completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 [2024-05-13 03:12:03.826012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Write completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Write completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Write completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Write completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Write completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Write completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Write completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Write completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Write completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Read completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Write completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 Write completed with error (sct=0, sc=8) 00:31:13.651 starting I/O failed 00:31:13.651 [2024-05-13 03:12:03.826294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport 
error -6 (No such device or address) on qpair id 3 00:31:13.652 [2024-05-13 03:12:03.826806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.827019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.827045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.652 qpair failed and we were unable to recover it. 00:31:13.652 [2024-05-13 03:12:03.827299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.827512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.827537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.652 qpair failed and we were unable to recover it. 00:31:13.652 [2024-05-13 03:12:03.827749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.827962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.827989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.652 qpair failed and we were unable to recover it. 00:31:13.652 [2024-05-13 03:12:03.828221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.828656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.828713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.652 qpair failed and we were unable to recover it. 00:31:13.652 [2024-05-13 03:12:03.828946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.829258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.829310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.652 qpair failed and we were unable to recover it. 00:31:13.652 [2024-05-13 03:12:03.829758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.829961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.829986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.652 qpair failed and we were unable to recover it. 00:31:13.652 [2024-05-13 03:12:03.830229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.830424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.830465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.652 qpair failed and we were unable to recover it. 
00:31:13.652 [2024-05-13 03:12:03.830711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.830892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.830916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.652 qpair failed and we were unable to recover it. 00:31:13.652 [2024-05-13 03:12:03.831158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.831549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.831600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.652 qpair failed and we were unable to recover it. 00:31:13.652 [2024-05-13 03:12:03.831842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.832028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.832053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.652 qpair failed and we were unable to recover it. 00:31:13.652 [2024-05-13 03:12:03.832298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.832655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.832713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.652 qpair failed and we were unable to recover it. 00:31:13.652 [2024-05-13 03:12:03.832960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.833324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.833372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.652 qpair failed and we were unable to recover it. 00:31:13.652 [2024-05-13 03:12:03.833635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.833860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.833885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.652 qpair failed and we were unable to recover it. 00:31:13.652 [2024-05-13 03:12:03.834082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.834429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.834478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.652 qpair failed and we were unable to recover it. 
00:31:13.652 [2024-05-13 03:12:03.834729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.834961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.834986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.652 qpair failed and we were unable to recover it. 00:31:13.652 [2024-05-13 03:12:03.835314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.835732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.835798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.652 qpair failed and we were unable to recover it. 00:31:13.652 [2024-05-13 03:12:03.835998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.836189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.836214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.652 qpair failed and we were unable to recover it. 00:31:13.652 [2024-05-13 03:12:03.836456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.836646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.836672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.652 qpair failed and we were unable to recover it. 00:31:13.652 [2024-05-13 03:12:03.836902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.837159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.837198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.652 qpair failed and we were unable to recover it. 00:31:13.652 [2024-05-13 03:12:03.837448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.837749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.837774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.652 qpair failed and we were unable to recover it. 00:31:13.652 [2024-05-13 03:12:03.838943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.839326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.839378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.652 qpair failed and we were unable to recover it. 
00:31:13.652 [2024-05-13 03:12:03.839643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.839895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.839920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.652 qpair failed and we were unable to recover it. 00:31:13.652 [2024-05-13 03:12:03.840228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.840511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.840539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.652 qpair failed and we were unable to recover it. 00:31:13.652 [2024-05-13 03:12:03.840778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.840980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.841008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.652 qpair failed and we were unable to recover it. 00:31:13.652 [2024-05-13 03:12:03.841239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.841455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.841479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.652 qpair failed and we were unable to recover it. 00:31:13.652 [2024-05-13 03:12:03.841668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.841920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.841945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.652 qpair failed and we were unable to recover it. 00:31:13.652 [2024-05-13 03:12:03.842263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.842448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.842473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.652 qpair failed and we were unable to recover it. 00:31:13.652 [2024-05-13 03:12:03.842709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.842959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.842986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.652 qpair failed and we were unable to recover it. 
00:31:13.652 [2024-05-13 03:12:03.843221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.843450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.843475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.652 qpair failed and we were unable to recover it. 00:31:13.652 [2024-05-13 03:12:03.843740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.652 [2024-05-13 03:12:03.843958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.843985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.653 qpair failed and we were unable to recover it. 00:31:13.653 [2024-05-13 03:12:03.844253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.844478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.844502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.653 qpair failed and we were unable to recover it. 00:31:13.653 [2024-05-13 03:12:03.844753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.845001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.845028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.653 qpair failed and we were unable to recover it. 00:31:13.653 [2024-05-13 03:12:03.845266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.845470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.845500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.653 qpair failed and we were unable to recover it. 00:31:13.653 [2024-05-13 03:12:03.845734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.845952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.845979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.653 qpair failed and we were unable to recover it. 00:31:13.653 [2024-05-13 03:12:03.846245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.846460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.846489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.653 qpair failed and we were unable to recover it. 
00:31:13.653 [2024-05-13 03:12:03.846709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.846959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.846988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.653 qpair failed and we were unable to recover it. 00:31:13.653 [2024-05-13 03:12:03.847240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.847499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.847526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.653 qpair failed and we were unable to recover it. 00:31:13.653 [2024-05-13 03:12:03.847799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.848030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.848053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.653 qpair failed and we were unable to recover it. 00:31:13.653 [2024-05-13 03:12:03.848254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.848443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.848467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.653 qpair failed and we were unable to recover it. 00:31:13.653 [2024-05-13 03:12:03.848686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.848931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.848956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.653 qpair failed and we were unable to recover it. 00:31:13.653 [2024-05-13 03:12:03.849148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.849611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.849659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.653 qpair failed and we were unable to recover it. 00:31:13.653 [2024-05-13 03:12:03.849907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.850107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.850131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.653 qpair failed and we were unable to recover it. 
00:31:13.653 [2024-05-13 03:12:03.850383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.850650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.850674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.653 qpair failed and we were unable to recover it. 00:31:13.653 [2024-05-13 03:12:03.850925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.851190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.851218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.653 qpair failed and we were unable to recover it. 00:31:13.653 [2024-05-13 03:12:03.851436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.851670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.851704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.653 qpair failed and we were unable to recover it. 00:31:13.653 [2024-05-13 03:12:03.851945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.852221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.852245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.653 qpair failed and we were unable to recover it. 00:31:13.653 [2024-05-13 03:12:03.852475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.852721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.852746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.653 qpair failed and we were unable to recover it. 00:31:13.653 [2024-05-13 03:12:03.852987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.853229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.853256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.653 qpair failed and we were unable to recover it. 00:31:13.653 [2024-05-13 03:12:03.853503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.853732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.853758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.653 qpair failed and we were unable to recover it. 
00:31:13.653 [2024-05-13 03:12:03.853982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.854385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.854408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.653 qpair failed and we were unable to recover it. 00:31:13.653 [2024-05-13 03:12:03.854683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.854912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.854939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.653 qpair failed and we were unable to recover it. 00:31:13.653 [2024-05-13 03:12:03.855154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.855420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.855444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.653 qpair failed and we were unable to recover it. 00:31:13.653 [2024-05-13 03:12:03.855677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.855906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.855931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.653 qpair failed and we were unable to recover it. 00:31:13.653 [2024-05-13 03:12:03.856133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.856326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.856349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.653 qpair failed and we were unable to recover it. 00:31:13.653 [2024-05-13 03:12:03.856577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.856835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.856861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.653 qpair failed and we were unable to recover it. 00:31:13.653 [2024-05-13 03:12:03.857086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.857330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.857355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.653 qpair failed and we were unable to recover it. 
00:31:13.653 [2024-05-13 03:12:03.857546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.857759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.857785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.653 qpair failed and we were unable to recover it. 00:31:13.653 [2024-05-13 03:12:03.857966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.858182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.858206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.653 qpair failed and we were unable to recover it. 00:31:13.653 [2024-05-13 03:12:03.858444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.858683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.653 [2024-05-13 03:12:03.858714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.653 qpair failed and we were unable to recover it. 00:31:13.653 [2024-05-13 03:12:03.858965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.654 [2024-05-13 03:12:03.859213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.654 [2024-05-13 03:12:03.859237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.654 qpair failed and we were unable to recover it. 00:31:13.654 [2024-05-13 03:12:03.859469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.654 [2024-05-13 03:12:03.859713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.654 [2024-05-13 03:12:03.859738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.654 qpair failed and we were unable to recover it. 00:31:13.654 [2024-05-13 03:12:03.859964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.654 [2024-05-13 03:12:03.860328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.654 [2024-05-13 03:12:03.860382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.654 qpair failed and we were unable to recover it. 00:31:13.654 [2024-05-13 03:12:03.860640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.654 [2024-05-13 03:12:03.860878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.654 [2024-05-13 03:12:03.860903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.654 qpair failed and we were unable to recover it. 
00:31:13.654 [2024-05-13 03:12:03.861125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.654 [2024-05-13 03:12:03.861539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.654 [2024-05-13 03:12:03.861586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420
00:31:13.654 qpair failed and we were unable to recover it.
00:31:13.654 [... the four-line sequence above (two posix_sock_create connect() failures with errno = 111, the nvme_tcp_qpair_connect_sock error for tqpair=0xf2be50 against addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.") repeats for every retry from 03:12:03.861 through 03:12:03.945, wall clock 00:31:13.654 to 00:31:13.659, with only the timestamps changing ...]
00:31:13.659 [2024-05-13 03:12:03.945741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.659 [2024-05-13 03:12:03.945951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.659 [2024-05-13 03:12:03.945978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.659 qpair failed and we were unable to recover it. 00:31:13.659 [2024-05-13 03:12:03.946188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.659 [2024-05-13 03:12:03.946454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.659 [2024-05-13 03:12:03.946479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.659 qpair failed and we were unable to recover it. 00:31:13.659 [2024-05-13 03:12:03.946692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.659 [2024-05-13 03:12:03.946892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.659 [2024-05-13 03:12:03.946916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.659 qpair failed and we were unable to recover it. 00:31:13.659 [2024-05-13 03:12:03.947129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.659 [2024-05-13 03:12:03.947366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.659 [2024-05-13 03:12:03.947406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.659 qpair failed and we were unable to recover it. 00:31:13.659 [2024-05-13 03:12:03.947652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.659 [2024-05-13 03:12:03.947878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.659 [2024-05-13 03:12:03.947903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.659 qpair failed and we were unable to recover it. 00:31:13.659 [2024-05-13 03:12:03.948124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.659 [2024-05-13 03:12:03.948430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.659 [2024-05-13 03:12:03.948454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.659 qpair failed and we were unable to recover it. 00:31:13.659 [2024-05-13 03:12:03.948674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.659 [2024-05-13 03:12:03.948909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.659 [2024-05-13 03:12:03.948934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.659 qpair failed and we were unable to recover it. 
00:31:13.659 [2024-05-13 03:12:03.949164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.659 [2024-05-13 03:12:03.949350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.659 [2024-05-13 03:12:03.949375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.659 qpair failed and we were unable to recover it. 00:31:13.659 [2024-05-13 03:12:03.949673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.659 [2024-05-13 03:12:03.949950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.659 [2024-05-13 03:12:03.949977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.659 qpair failed and we were unable to recover it. 00:31:13.659 [2024-05-13 03:12:03.950205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.659 [2024-05-13 03:12:03.950638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.659 [2024-05-13 03:12:03.950694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.659 qpair failed and we were unable to recover it. 00:31:13.660 [2024-05-13 03:12:03.950939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.951179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.951208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.660 qpair failed and we were unable to recover it. 00:31:13.660 [2024-05-13 03:12:03.951450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.951657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.951682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.660 qpair failed and we were unable to recover it. 00:31:13.660 [2024-05-13 03:12:03.951910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.952145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.952173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.660 qpair failed and we were unable to recover it. 00:31:13.660 [2024-05-13 03:12:03.952490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.952758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.952784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.660 qpair failed and we were unable to recover it. 
00:31:13.660 [2024-05-13 03:12:03.952978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.953219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.953248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.660 qpair failed and we were unable to recover it. 00:31:13.660 [2024-05-13 03:12:03.953503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.953759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.953785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.660 qpair failed and we were unable to recover it. 00:31:13.660 [2024-05-13 03:12:03.954069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.954434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.954496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.660 qpair failed and we were unable to recover it. 00:31:13.660 [2024-05-13 03:12:03.954761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.955007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.955048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.660 qpair failed and we were unable to recover it. 00:31:13.660 [2024-05-13 03:12:03.955308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.955526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.955551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.660 qpair failed and we were unable to recover it. 00:31:13.660 [2024-05-13 03:12:03.955790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.956030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.956057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.660 qpair failed and we were unable to recover it. 00:31:13.660 [2024-05-13 03:12:03.956301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.956545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.956569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.660 qpair failed and we were unable to recover it. 
00:31:13.660 [2024-05-13 03:12:03.956779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.957050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.957077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.660 qpair failed and we were unable to recover it. 00:31:13.660 [2024-05-13 03:12:03.957344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.957532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.957558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.660 qpair failed and we were unable to recover it. 00:31:13.660 [2024-05-13 03:12:03.957806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.958084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.958132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.660 qpair failed and we were unable to recover it. 00:31:13.660 [2024-05-13 03:12:03.958435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.958710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.958735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.660 qpair failed and we were unable to recover it. 00:31:13.660 [2024-05-13 03:12:03.959015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.959398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.959458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.660 qpair failed and we were unable to recover it. 00:31:13.660 [2024-05-13 03:12:03.959709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.960069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.960113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.660 qpair failed and we were unable to recover it. 00:31:13.660 [2024-05-13 03:12:03.960408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.960665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.960715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.660 qpair failed and we were unable to recover it. 
00:31:13.660 [2024-05-13 03:12:03.960962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.961203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.961244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.660 qpair failed and we were unable to recover it. 00:31:13.660 [2024-05-13 03:12:03.961463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.961677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.961709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.660 qpair failed and we were unable to recover it. 00:31:13.660 [2024-05-13 03:12:03.961907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.962282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.962340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.660 qpair failed and we were unable to recover it. 00:31:13.660 [2024-05-13 03:12:03.962579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.962814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.962844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.660 qpair failed and we were unable to recover it. 00:31:13.660 [2024-05-13 03:12:03.963066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.963390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.963414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.660 qpair failed and we were unable to recover it. 00:31:13.660 [2024-05-13 03:12:03.963676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.963861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.963886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.660 qpair failed and we were unable to recover it. 00:31:13.660 [2024-05-13 03:12:03.964105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.964375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.964403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.660 qpair failed and we were unable to recover it. 
00:31:13.660 [2024-05-13 03:12:03.964644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.964857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.964884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.660 qpair failed and we were unable to recover it. 00:31:13.660 [2024-05-13 03:12:03.965120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.965364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.965389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.660 qpair failed and we were unable to recover it. 00:31:13.660 [2024-05-13 03:12:03.965608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.965876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.965901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.660 qpair failed and we were unable to recover it. 00:31:13.660 [2024-05-13 03:12:03.966114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.966322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.966347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.660 qpair failed and we were unable to recover it. 00:31:13.660 [2024-05-13 03:12:03.966611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.966824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.660 [2024-05-13 03:12:03.966850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.661 qpair failed and we were unable to recover it. 00:31:13.661 [2024-05-13 03:12:03.967063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.967333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.967358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.661 qpair failed and we were unable to recover it. 00:31:13.661 [2024-05-13 03:12:03.967631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.967917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.967943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.661 qpair failed and we were unable to recover it. 
00:31:13.661 [2024-05-13 03:12:03.968183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.968398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.968425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.661 qpair failed and we were unable to recover it. 00:31:13.661 [2024-05-13 03:12:03.968725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.968993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.969023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.661 qpair failed and we were unable to recover it. 00:31:13.661 [2024-05-13 03:12:03.969335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.969656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.969703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.661 qpair failed and we were unable to recover it. 00:31:13.661 [2024-05-13 03:12:03.969930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.970123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.970149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.661 qpair failed and we were unable to recover it. 00:31:13.661 [2024-05-13 03:12:03.970415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.970656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.970703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.661 qpair failed and we were unable to recover it. 00:31:13.661 [2024-05-13 03:12:03.970966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.971233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.971281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.661 qpair failed and we were unable to recover it. 00:31:13.661 [2024-05-13 03:12:03.971488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.971709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.971739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.661 qpair failed and we were unable to recover it. 
00:31:13.661 [2024-05-13 03:12:03.971956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.972385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.972439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.661 qpair failed and we were unable to recover it. 00:31:13.661 [2024-05-13 03:12:03.972709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.972933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.972962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.661 qpair failed and we were unable to recover it. 00:31:13.661 [2024-05-13 03:12:03.973260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.973518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.973545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.661 qpair failed and we were unable to recover it. 00:31:13.661 [2024-05-13 03:12:03.973807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.974118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.974145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.661 qpair failed and we were unable to recover it. 00:31:13.661 [2024-05-13 03:12:03.974391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.974686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.974718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.661 qpair failed and we were unable to recover it. 00:31:13.661 [2024-05-13 03:12:03.974936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.975118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.975143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.661 qpair failed and we were unable to recover it. 00:31:13.661 [2024-05-13 03:12:03.975362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.975539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.975564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.661 qpair failed and we were unable to recover it. 
00:31:13.661 [2024-05-13 03:12:03.975812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.976201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.976255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.661 qpair failed and we were unable to recover it. 00:31:13.661 [2024-05-13 03:12:03.976511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.976796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.976824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.661 qpair failed and we were unable to recover it. 00:31:13.661 [2024-05-13 03:12:03.977099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.977409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.977436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.661 qpair failed and we were unable to recover it. 00:31:13.661 [2024-05-13 03:12:03.977753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.978018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.978046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.661 qpair failed and we were unable to recover it. 00:31:13.661 [2024-05-13 03:12:03.978267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.978580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.978612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.661 qpair failed and we were unable to recover it. 00:31:13.661 [2024-05-13 03:12:03.978856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.979235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.979285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.661 qpair failed and we were unable to recover it. 00:31:13.661 [2024-05-13 03:12:03.979503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.979805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.979831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.661 qpair failed and we were unable to recover it. 
00:31:13.661 [2024-05-13 03:12:03.980076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.980385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.980443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.661 qpair failed and we were unable to recover it. 00:31:13.661 [2024-05-13 03:12:03.980686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.980973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.981001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.661 qpair failed and we were unable to recover it. 00:31:13.661 [2024-05-13 03:12:03.981241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.981509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.981533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.661 qpair failed and we were unable to recover it. 00:31:13.661 [2024-05-13 03:12:03.981760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.981976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.982000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.661 qpair failed and we were unable to recover it. 00:31:13.661 [2024-05-13 03:12:03.982208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.982425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.982450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.661 qpair failed and we were unable to recover it. 00:31:13.661 [2024-05-13 03:12:03.982731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.982984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.983011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.661 qpair failed and we were unable to recover it. 00:31:13.661 [2024-05-13 03:12:03.983275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.661 [2024-05-13 03:12:03.983496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.983520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.662 qpair failed and we were unable to recover it. 
00:31:13.662 [2024-05-13 03:12:03.983799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.984039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.984066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.662 qpair failed and we were unable to recover it. 00:31:13.662 [2024-05-13 03:12:03.984285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.984526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.984553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.662 qpair failed and we were unable to recover it. 00:31:13.662 [2024-05-13 03:12:03.984786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.985004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.985030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.662 qpair failed and we were unable to recover it. 00:31:13.662 [2024-05-13 03:12:03.985349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.985553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.985583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.662 qpair failed and we were unable to recover it. 00:31:13.662 [2024-05-13 03:12:03.985826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.986062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.986089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.662 qpair failed and we were unable to recover it. 00:31:13.662 [2024-05-13 03:12:03.986295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.986613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.986651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.662 qpair failed and we were unable to recover it. 00:31:13.662 [2024-05-13 03:12:03.986883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.987185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.987248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.662 qpair failed and we were unable to recover it. 
00:31:13.662 [2024-05-13 03:12:03.987492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.987763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.987793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.662 qpair failed and we were unable to recover it. 00:31:13.662 [2024-05-13 03:12:03.988061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.988439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.988490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.662 qpair failed and we were unable to recover it. 00:31:13.662 [2024-05-13 03:12:03.988740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.989009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.989034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.662 qpair failed and we were unable to recover it. 00:31:13.662 [2024-05-13 03:12:03.989276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.989548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.989575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.662 qpair failed and we were unable to recover it. 00:31:13.662 [2024-05-13 03:12:03.989899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.990204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.990259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.662 qpair failed and we were unable to recover it. 00:31:13.662 [2024-05-13 03:12:03.990494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.990748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.990790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.662 qpair failed and we were unable to recover it. 00:31:13.662 [2024-05-13 03:12:03.991041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.991287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.991311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.662 qpair failed and we were unable to recover it. 
00:31:13.662 [2024-05-13 03:12:03.991535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.991832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.991857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.662 qpair failed and we were unable to recover it. 00:31:13.662 [2024-05-13 03:12:03.992111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.992374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.992401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.662 qpair failed and we were unable to recover it. 00:31:13.662 [2024-05-13 03:12:03.992640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.992856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.992884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.662 qpair failed and we were unable to recover it. 00:31:13.662 [2024-05-13 03:12:03.993124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.993341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.993371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.662 qpair failed and we were unable to recover it. 00:31:13.662 [2024-05-13 03:12:03.993615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.993889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.993917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.662 qpair failed and we were unable to recover it. 00:31:13.662 [2024-05-13 03:12:03.994236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.994526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.994553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.662 qpair failed and we were unable to recover it. 00:31:13.662 [2024-05-13 03:12:03.994794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.995061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.995086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.662 qpair failed and we were unable to recover it. 
00:31:13.662 [2024-05-13 03:12:03.995398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.995678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.995715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.662 qpair failed and we were unable to recover it. 00:31:13.662 [2024-05-13 03:12:03.996024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.996389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.996413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.662 qpair failed and we were unable to recover it. 00:31:13.662 [2024-05-13 03:12:03.996731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.996957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.996999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.662 qpair failed and we were unable to recover it. 00:31:13.662 [2024-05-13 03:12:03.997234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.997536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.997560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.662 qpair failed and we were unable to recover it. 00:31:13.662 [2024-05-13 03:12:03.997854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.998047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.998071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.662 qpair failed and we were unable to recover it. 00:31:13.662 [2024-05-13 03:12:03.998333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.998640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.998665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.662 qpair failed and we were unable to recover it. 00:31:13.662 [2024-05-13 03:12:03.998925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.999154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.999182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.662 qpair failed and we were unable to recover it. 
00:31:13.662 [2024-05-13 03:12:03.999422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.999631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:03.999661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.662 qpair failed and we were unable to recover it. 00:31:13.662 [2024-05-13 03:12:03.999915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.662 [2024-05-13 03:12:04.000163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.000191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.663 qpair failed and we were unable to recover it. 00:31:13.663 [2024-05-13 03:12:04.000424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.000688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.000725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.663 qpair failed and we were unable to recover it. 00:31:13.663 [2024-05-13 03:12:04.000987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.001227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.001256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.663 qpair failed and we were unable to recover it. 00:31:13.663 [2024-05-13 03:12:04.001452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.001749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.001775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.663 qpair failed and we were unable to recover it. 00:31:13.663 [2024-05-13 03:12:04.002079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.002384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.002412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.663 qpair failed and we were unable to recover it. 00:31:13.663 [2024-05-13 03:12:04.002678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.002891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.002918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.663 qpair failed and we were unable to recover it. 
00:31:13.663 [2024-05-13 03:12:04.003183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.003496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.003547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.663 qpair failed and we were unable to recover it. 00:31:13.663 [2024-05-13 03:12:04.003792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.004021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.004044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.663 qpair failed and we were unable to recover it. 00:31:13.663 [2024-05-13 03:12:04.004305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.004622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.004664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.663 qpair failed and we were unable to recover it. 00:31:13.663 [2024-05-13 03:12:04.004954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.005442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.005491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.663 qpair failed and we were unable to recover it. 00:31:13.663 [2024-05-13 03:12:04.005762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.005984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.006009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.663 qpair failed and we were unable to recover it. 00:31:13.663 [2024-05-13 03:12:04.006231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.006466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.006493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.663 qpair failed and we were unable to recover it. 00:31:13.663 [2024-05-13 03:12:04.006747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.007036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.007064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.663 qpair failed and we were unable to recover it. 
00:31:13.663 [2024-05-13 03:12:04.007336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.007589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.007614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.663 qpair failed and we were unable to recover it. 00:31:13.663 [2024-05-13 03:12:04.007929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.008246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.008284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.663 qpair failed and we were unable to recover it. 00:31:13.663 [2024-05-13 03:12:04.008533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.008741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.008766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.663 qpair failed and we were unable to recover it. 00:31:13.663 [2024-05-13 03:12:04.009039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.009280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.009304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.663 qpair failed and we were unable to recover it. 00:31:13.663 [2024-05-13 03:12:04.009552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.009847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.009876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.663 qpair failed and we were unable to recover it. 00:31:13.663 [2024-05-13 03:12:04.010140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.010412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.010439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.663 qpair failed and we were unable to recover it. 00:31:13.663 [2024-05-13 03:12:04.010675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.010908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.010934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.663 qpair failed and we were unable to recover it. 
00:31:13.663 [2024-05-13 03:12:04.011198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.011440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.011467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.663 qpair failed and we were unable to recover it. 00:31:13.663 [2024-05-13 03:12:04.011744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.012002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.012026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.663 qpair failed and we were unable to recover it. 00:31:13.663 [2024-05-13 03:12:04.012250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.012468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.012494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.663 qpair failed and we were unable to recover it. 00:31:13.663 [2024-05-13 03:12:04.012783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.013046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.013073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.663 qpair failed and we were unable to recover it. 00:31:13.663 [2024-05-13 03:12:04.013322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.013520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.013545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.663 qpair failed and we were unable to recover it. 00:31:13.663 [2024-05-13 03:12:04.013791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.013997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.014025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.663 qpair failed and we were unable to recover it. 00:31:13.663 [2024-05-13 03:12:04.014243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.014580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.014633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.663 qpair failed and we were unable to recover it. 
00:31:13.663 [2024-05-13 03:12:04.014950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.015236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.015260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.663 qpair failed and we were unable to recover it. 00:31:13.663 [2024-05-13 03:12:04.015539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.015757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.015786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.663 qpair failed and we were unable to recover it. 00:31:13.663 [2024-05-13 03:12:04.016071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.016414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.016438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.663 qpair failed and we were unable to recover it. 00:31:13.663 [2024-05-13 03:12:04.016686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.016990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.663 [2024-05-13 03:12:04.017014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.664 qpair failed and we were unable to recover it. 00:31:13.664 [2024-05-13 03:12:04.017310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.017677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.017743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.664 qpair failed and we were unable to recover it. 00:31:13.664 [2024-05-13 03:12:04.017949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.018255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.018294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.664 qpair failed and we were unable to recover it. 00:31:13.664 [2024-05-13 03:12:04.018541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.018763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.018795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.664 qpair failed and we were unable to recover it. 
00:31:13.664 [2024-05-13 03:12:04.019056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.019481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.019536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.664 qpair failed and we were unable to recover it. 00:31:13.664 [2024-05-13 03:12:04.019781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.020017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.020040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.664 qpair failed and we were unable to recover it. 00:31:13.664 [2024-05-13 03:12:04.020362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.020613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.020654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.664 qpair failed and we were unable to recover it. 00:31:13.664 [2024-05-13 03:12:04.020906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.021148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.021190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.664 qpair failed and we were unable to recover it. 00:31:13.664 [2024-05-13 03:12:04.021440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.021708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.021737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.664 qpair failed and we were unable to recover it. 00:31:13.664 [2024-05-13 03:12:04.021955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.022176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.022206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.664 qpair failed and we were unable to recover it. 00:31:13.664 [2024-05-13 03:12:04.022478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.022781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.022821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.664 qpair failed and we were unable to recover it. 
00:31:13.664 [2024-05-13 03:12:04.023183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.023675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.023733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.664 qpair failed and we were unable to recover it. 00:31:13.664 [2024-05-13 03:12:04.024034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.024265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.024293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.664 qpair failed and we were unable to recover it. 00:31:13.664 [2024-05-13 03:12:04.024554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.024816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.024841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.664 qpair failed and we were unable to recover it. 00:31:13.664 [2024-05-13 03:12:04.025073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.025395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.025423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.664 qpair failed and we were unable to recover it. 00:31:13.664 [2024-05-13 03:12:04.025663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.025891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.025917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.664 qpair failed and we were unable to recover it. 00:31:13.664 [2024-05-13 03:12:04.026139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.026510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.026533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.664 qpair failed and we were unable to recover it. 00:31:13.664 [2024-05-13 03:12:04.026765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.027011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.027036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.664 qpair failed and we were unable to recover it. 
00:31:13.664 [2024-05-13 03:12:04.027250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.027535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.027560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.664 qpair failed and we were unable to recover it. 00:31:13.664 [2024-05-13 03:12:04.027823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.028093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.028118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.664 qpair failed and we were unable to recover it. 00:31:13.664 [2024-05-13 03:12:04.028430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.028672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.028706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.664 qpair failed and we were unable to recover it. 00:31:13.664 [2024-05-13 03:12:04.028949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.029338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.029385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.664 qpair failed and we were unable to recover it. 00:31:13.664 [2024-05-13 03:12:04.029626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.029891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.029916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.664 qpair failed and we were unable to recover it. 00:31:13.664 [2024-05-13 03:12:04.030146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.030377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.030409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.664 qpair failed and we were unable to recover it. 00:31:13.664 [2024-05-13 03:12:04.030647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.030892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.030920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.664 qpair failed and we were unable to recover it. 
00:31:13.664 [2024-05-13 03:12:04.031128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.031591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.031640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.664 qpair failed and we were unable to recover it. 00:31:13.664 [2024-05-13 03:12:04.031882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.664 [2024-05-13 03:12:04.032093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.032121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.665 qpair failed and we were unable to recover it. 00:31:13.665 [2024-05-13 03:12:04.032404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.032627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.032654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.665 qpair failed and we were unable to recover it. 00:31:13.665 [2024-05-13 03:12:04.032908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.033155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.033179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.665 qpair failed and we were unable to recover it. 00:31:13.665 [2024-05-13 03:12:04.033503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.033744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.033769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.665 qpair failed and we were unable to recover it. 00:31:13.665 [2024-05-13 03:12:04.034037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.034356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.034380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.665 qpair failed and we were unable to recover it. 00:31:13.665 [2024-05-13 03:12:04.034607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.034866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.034894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.665 qpair failed and we were unable to recover it. 
00:31:13.665 [2024-05-13 03:12:04.035157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.035466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.035525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.665 qpair failed and we were unable to recover it. 00:31:13.665 [2024-05-13 03:12:04.035778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.036055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.036083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.665 qpair failed and we were unable to recover it. 00:31:13.665 [2024-05-13 03:12:04.036351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.036626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.036651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.665 qpair failed and we were unable to recover it. 00:31:13.665 [2024-05-13 03:12:04.036888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.037157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.037206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.665 qpair failed and we were unable to recover it. 00:31:13.665 [2024-05-13 03:12:04.037469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.037717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.037746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.665 qpair failed and we were unable to recover it. 00:31:13.665 [2024-05-13 03:12:04.037992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.038231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.038259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.665 qpair failed and we were unable to recover it. 00:31:13.665 [2024-05-13 03:12:04.038463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.038728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.038757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.665 qpair failed and we were unable to recover it. 
00:31:13.665 [2024-05-13 03:12:04.039022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.039371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.039427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.665 qpair failed and we were unable to recover it. 00:31:13.665 [2024-05-13 03:12:04.039694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.039916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.039941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.665 qpair failed and we were unable to recover it. 00:31:13.665 [2024-05-13 03:12:04.040190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.040454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.040482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.665 qpair failed and we were unable to recover it. 00:31:13.665 [2024-05-13 03:12:04.040727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.040940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.040965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.665 qpair failed and we were unable to recover it. 00:31:13.665 [2024-05-13 03:12:04.041147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.041375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.041399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.665 qpair failed and we were unable to recover it. 00:31:13.665 [2024-05-13 03:12:04.041689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.041972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.042000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.665 qpair failed and we were unable to recover it. 00:31:13.665 [2024-05-13 03:12:04.042269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.042486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.042511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.665 qpair failed and we were unable to recover it. 
00:31:13.665 [2024-05-13 03:12:04.042736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.042958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.042983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.665 qpair failed and we were unable to recover it. 00:31:13.665 [2024-05-13 03:12:04.043261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.043497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.043524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.665 qpair failed and we were unable to recover it. 00:31:13.665 [2024-05-13 03:12:04.043743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.043934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.043959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.665 qpair failed and we were unable to recover it. 00:31:13.665 [2024-05-13 03:12:04.044202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.044420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.044445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.665 qpair failed and we were unable to recover it. 00:31:13.665 [2024-05-13 03:12:04.044692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.044921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.044948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.665 qpair failed and we were unable to recover it. 00:31:13.665 [2024-05-13 03:12:04.045202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.045550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.045612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.665 qpair failed and we were unable to recover it. 00:31:13.665 [2024-05-13 03:12:04.045852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.046084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.046108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.665 qpair failed and we were unable to recover it. 
00:31:13.665 [2024-05-13 03:12:04.046334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.046723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.046768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.665 qpair failed and we were unable to recover it. 00:31:13.665 [2024-05-13 03:12:04.047031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.047278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.047305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.665 qpair failed and we were unable to recover it. 00:31:13.665 [2024-05-13 03:12:04.047627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.047852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.047877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.665 qpair failed and we were unable to recover it. 00:31:13.665 [2024-05-13 03:12:04.048099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.665 [2024-05-13 03:12:04.048310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.048339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.666 qpair failed and we were unable to recover it. 00:31:13.666 [2024-05-13 03:12:04.048578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.048815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.048843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.666 qpair failed and we were unable to recover it. 00:31:13.666 [2024-05-13 03:12:04.049120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.049492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.049540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.666 qpair failed and we were unable to recover it. 00:31:13.666 [2024-05-13 03:12:04.049809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.050084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.050111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.666 qpair failed and we were unable to recover it. 
00:31:13.666 [2024-05-13 03:12:04.050384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.050591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.050615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.666 qpair failed and we were unable to recover it. 00:31:13.666 [2024-05-13 03:12:04.050835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.051203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.051259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.666 qpair failed and we were unable to recover it. 00:31:13.666 [2024-05-13 03:12:04.051500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.051746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.051774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.666 qpair failed and we were unable to recover it. 00:31:13.666 [2024-05-13 03:12:04.052022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.052212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.052237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.666 qpair failed and we were unable to recover it. 00:31:13.666 [2024-05-13 03:12:04.052472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.052663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.052692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.666 qpair failed and we were unable to recover it. 00:31:13.666 [2024-05-13 03:12:04.052974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.053196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.053223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.666 qpair failed and we were unable to recover it. 00:31:13.666 [2024-05-13 03:12:04.053452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.053654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.053679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.666 qpair failed and we were unable to recover it. 
00:31:13.666 [2024-05-13 03:12:04.053897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.054136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.054161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.666 qpair failed and we were unable to recover it. 00:31:13.666 [2024-05-13 03:12:04.054412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.054648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.054675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.666 qpair failed and we were unable to recover it. 00:31:13.666 [2024-05-13 03:12:04.054908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.055214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.055266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.666 qpair failed and we were unable to recover it. 00:31:13.666 [2024-05-13 03:12:04.055497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.055753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.055779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.666 qpair failed and we were unable to recover it. 00:31:13.666 [2024-05-13 03:12:04.056004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.056225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.056249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.666 qpair failed and we were unable to recover it. 00:31:13.666 [2024-05-13 03:12:04.056532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.056784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.056809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.666 qpair failed and we were unable to recover it. 00:31:13.666 [2024-05-13 03:12:04.057002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.057270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.057321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.666 qpair failed and we were unable to recover it. 
00:31:13.666 [2024-05-13 03:12:04.057592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.057781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.057810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.666 qpair failed and we were unable to recover it. 00:31:13.666 [2024-05-13 03:12:04.058005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.058247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.058287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.666 qpair failed and we were unable to recover it. 00:31:13.666 [2024-05-13 03:12:04.058533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.058802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.058830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.666 qpair failed and we were unable to recover it. 00:31:13.666 [2024-05-13 03:12:04.059073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.059289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.059313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.666 qpair failed and we were unable to recover it. 00:31:13.666 [2024-05-13 03:12:04.059532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.059773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.059816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.666 qpair failed and we were unable to recover it. 00:31:13.666 [2024-05-13 03:12:04.060057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.060297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.060324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.666 qpair failed and we were unable to recover it. 00:31:13.666 [2024-05-13 03:12:04.060560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.060809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.060834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.666 qpair failed and we were unable to recover it. 
00:31:13.666 [2024-05-13 03:12:04.061053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.061295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.061322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.666 qpair failed and we were unable to recover it. 00:31:13.666 [2024-05-13 03:12:04.061592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.061835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.061863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.666 qpair failed and we were unable to recover it. 00:31:13.666 [2024-05-13 03:12:04.062102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.062413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.062465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.666 qpair failed and we were unable to recover it. 00:31:13.666 [2024-05-13 03:12:04.062679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.062925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.062953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.666 qpair failed and we were unable to recover it. 00:31:13.666 [2024-05-13 03:12:04.063226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.063507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.063532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.666 qpair failed and we were unable to recover it. 00:31:13.666 [2024-05-13 03:12:04.063725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.666 [2024-05-13 03:12:04.063945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.063974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.667 qpair failed and we were unable to recover it. 00:31:13.667 [2024-05-13 03:12:04.064215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.064459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.064488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.667 qpair failed and we were unable to recover it. 
00:31:13.667 [2024-05-13 03:12:04.064762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.065025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.065052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.667 qpair failed and we were unable to recover it. 00:31:13.667 [2024-05-13 03:12:04.065265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.065501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.065528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.667 qpair failed and we were unable to recover it. 00:31:13.667 [2024-05-13 03:12:04.065789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.066010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.066040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.667 qpair failed and we were unable to recover it. 00:31:13.667 [2024-05-13 03:12:04.066302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.066746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.066774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.667 qpair failed and we were unable to recover it. 00:31:13.667 [2024-05-13 03:12:04.067020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.067438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.067482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.667 qpair failed and we were unable to recover it. 00:31:13.667 [2024-05-13 03:12:04.067720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.067993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.068017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.667 qpair failed and we were unable to recover it. 00:31:13.667 [2024-05-13 03:12:04.068235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.068516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.068541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.667 qpair failed and we were unable to recover it. 
00:31:13.667 [2024-05-13 03:12:04.068820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.069048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.069072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.667 qpair failed and we were unable to recover it. 00:31:13.667 [2024-05-13 03:12:04.069292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.069541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.069565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.667 qpair failed and we were unable to recover it. 00:31:13.667 [2024-05-13 03:12:04.069834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.070066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.070090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.667 qpair failed and we were unable to recover it. 00:31:13.667 [2024-05-13 03:12:04.070344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.070563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.070588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.667 qpair failed and we were unable to recover it. 00:31:13.667 [2024-05-13 03:12:04.070881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.071120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.071160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.667 qpair failed and we were unable to recover it. 00:31:13.667 [2024-05-13 03:12:04.071411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.071650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.071677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.667 qpair failed and we were unable to recover it. 00:31:13.667 [2024-05-13 03:12:04.071906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.072266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.072316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.667 qpair failed and we were unable to recover it. 
00:31:13.667 [2024-05-13 03:12:04.072600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.072790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.072815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.667 qpair failed and we were unable to recover it. 00:31:13.667 [2024-05-13 03:12:04.073046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.073339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.073363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.667 qpair failed and we were unable to recover it. 00:31:13.667 [2024-05-13 03:12:04.073797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.074111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.074138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.667 qpair failed and we were unable to recover it. 00:31:13.667 [2024-05-13 03:12:04.074370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.074637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.074665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.667 qpair failed and we were unable to recover it. 00:31:13.667 [2024-05-13 03:12:04.074915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.075163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.075191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.667 qpair failed and we were unable to recover it. 00:31:13.667 [2024-05-13 03:12:04.075463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.075706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.075732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.667 qpair failed and we were unable to recover it. 00:31:13.667 [2024-05-13 03:12:04.075956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.076311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.076363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.667 qpair failed and we were unable to recover it. 
00:31:13.667 [2024-05-13 03:12:04.076605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.076874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.076901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.667 qpair failed and we were unable to recover it. 00:31:13.667 [2024-05-13 03:12:04.077131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.077375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.077416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.667 qpair failed and we were unable to recover it. 00:31:13.667 [2024-05-13 03:12:04.077679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.077934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.077962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.667 qpair failed and we were unable to recover it. 00:31:13.667 [2024-05-13 03:12:04.078205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.078549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.078618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.667 qpair failed and we were unable to recover it. 00:31:13.667 [2024-05-13 03:12:04.078864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.079082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.079109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.667 qpair failed and we were unable to recover it. 00:31:13.667 [2024-05-13 03:12:04.079345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.079549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.079577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.667 qpair failed and we were unable to recover it. 00:31:13.667 [2024-05-13 03:12:04.079841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.080034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.080063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.667 qpair failed and we were unable to recover it. 
00:31:13.667 [2024-05-13 03:12:04.080306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.667 [2024-05-13 03:12:04.080541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.080571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.668 qpair failed and we were unable to recover it. 00:31:13.668 [2024-05-13 03:12:04.080819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.081041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.081069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.668 qpair failed and we were unable to recover it. 00:31:13.668 [2024-05-13 03:12:04.081305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.081637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.081688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.668 qpair failed and we were unable to recover it. 00:31:13.668 [2024-05-13 03:12:04.081950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.082167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.082197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.668 qpair failed and we were unable to recover it. 00:31:13.668 [2024-05-13 03:12:04.082435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.082684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.082718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.668 qpair failed and we were unable to recover it. 00:31:13.668 [2024-05-13 03:12:04.082936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.083198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.083226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.668 qpair failed and we were unable to recover it. 00:31:13.668 [2024-05-13 03:12:04.083471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.083744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.083770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.668 qpair failed and we were unable to recover it. 
00:31:13.668 [2024-05-13 03:12:04.083988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.084207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.084231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.668 qpair failed and we were unable to recover it. 00:31:13.668 [2024-05-13 03:12:04.084410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.084642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.084669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.668 qpair failed and we were unable to recover it. 00:31:13.668 [2024-05-13 03:12:04.084920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.085185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.085236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.668 qpair failed and we were unable to recover it. 00:31:13.668 [2024-05-13 03:12:04.085486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.085733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.085775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.668 qpair failed and we were unable to recover it. 00:31:13.668 [2024-05-13 03:12:04.086010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.086242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.086266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.668 qpair failed and we were unable to recover it. 00:31:13.668 [2024-05-13 03:12:04.086512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.086751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.086780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.668 qpair failed and we were unable to recover it. 00:31:13.668 [2024-05-13 03:12:04.087026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.087234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.087258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.668 qpair failed and we were unable to recover it. 
00:31:13.668 [2024-05-13 03:12:04.087484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.087723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.087752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.668 qpair failed and we were unable to recover it. 00:31:13.668 [2024-05-13 03:12:04.087970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.088358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.088409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.668 qpair failed and we were unable to recover it. 00:31:13.668 [2024-05-13 03:12:04.088638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.088895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.088920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.668 qpair failed and we were unable to recover it. 00:31:13.668 [2024-05-13 03:12:04.089199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.089572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.089625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.668 qpair failed and we were unable to recover it. 00:31:13.668 [2024-05-13 03:12:04.089875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.090125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.090153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.668 qpair failed and we were unable to recover it. 00:31:13.668 [2024-05-13 03:12:04.090408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.090718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.090778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.668 qpair failed and we were unable to recover it. 00:31:13.668 [2024-05-13 03:12:04.090991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.091203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.091230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.668 qpair failed and we were unable to recover it. 
00:31:13.668 [2024-05-13 03:12:04.091471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.091752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.091778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.668 qpair failed and we were unable to recover it. 00:31:13.668 [2024-05-13 03:12:04.092017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.092237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.092262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.668 qpair failed and we were unable to recover it. 00:31:13.668 [2024-05-13 03:12:04.092491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.092768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.092793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.668 qpair failed and we were unable to recover it. 00:31:13.668 [2024-05-13 03:12:04.093017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.093260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.093301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.668 qpair failed and we were unable to recover it. 00:31:13.668 [2024-05-13 03:12:04.093515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.093746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.093772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.668 qpair failed and we were unable to recover it. 00:31:13.668 [2024-05-13 03:12:04.093977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.668 [2024-05-13 03:12:04.094190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.094218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.669 qpair failed and we were unable to recover it. 00:31:13.669 [2024-05-13 03:12:04.094421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.094675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.094707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.669 qpair failed and we were unable to recover it. 
00:31:13.669 [2024-05-13 03:12:04.094943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.095130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.095155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.669 qpair failed and we were unable to recover it. 00:31:13.669 [2024-05-13 03:12:04.095345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.095612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.095639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.669 qpair failed and we were unable to recover it. 00:31:13.669 [2024-05-13 03:12:04.095886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.096094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.096122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.669 qpair failed and we were unable to recover it. 00:31:13.669 [2024-05-13 03:12:04.096384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.096649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.096676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.669 qpair failed and we were unable to recover it. 00:31:13.669 [2024-05-13 03:12:04.096929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.097150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.097175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.669 qpair failed and we were unable to recover it. 00:31:13.669 [2024-05-13 03:12:04.097432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.097625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.097650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.669 qpair failed and we were unable to recover it. 00:31:13.669 [2024-05-13 03:12:04.097863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.098049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.098074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.669 qpair failed and we were unable to recover it. 
00:31:13.669 [2024-05-13 03:12:04.098337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.098606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.098633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.669 qpair failed and we were unable to recover it. 00:31:13.669 [2024-05-13 03:12:04.098873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.099089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.099114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.669 qpair failed and we were unable to recover it. 00:31:13.669 [2024-05-13 03:12:04.099306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.099526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.099554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.669 qpair failed and we were unable to recover it. 00:31:13.669 [2024-05-13 03:12:04.099786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.099978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.100019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.669 qpair failed and we were unable to recover it. 00:31:13.669 [2024-05-13 03:12:04.100231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.100495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.100524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.669 qpair failed and we were unable to recover it. 00:31:13.669 [2024-05-13 03:12:04.100814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.101005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.101046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.669 qpair failed and we were unable to recover it. 00:31:13.669 [2024-05-13 03:12:04.102606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.102865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.102893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.669 qpair failed and we were unable to recover it. 
00:31:13.669 [2024-05-13 03:12:04.103078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.103284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.103309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.669 qpair failed and we were unable to recover it. 00:31:13.669 [2024-05-13 03:12:04.103557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.103820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.103845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.669 qpair failed and we were unable to recover it. 00:31:13.669 [2024-05-13 03:12:04.104068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.104347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.104372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.669 qpair failed and we were unable to recover it. 00:31:13.669 [2024-05-13 03:12:04.104583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.104780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.104807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.669 qpair failed and we were unable to recover it. 00:31:13.669 [2024-05-13 03:12:04.105029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.105272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.105298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.669 qpair failed and we were unable to recover it. 00:31:13.669 [2024-05-13 03:12:04.105512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.105754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.105781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.669 qpair failed and we were unable to recover it. 00:31:13.669 [2024-05-13 03:12:04.106014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.106227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.106252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.669 qpair failed and we were unable to recover it. 
00:31:13.669 [2024-05-13 03:12:04.106492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.106685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.106720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.669 qpair failed and we were unable to recover it. 00:31:13.669 [2024-05-13 03:12:04.106943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.107129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.107158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.669 qpair failed and we were unable to recover it. 00:31:13.669 [2024-05-13 03:12:04.107352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.107564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.107591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.669 qpair failed and we were unable to recover it. 00:31:13.669 [2024-05-13 03:12:04.107813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.108009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.108034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.669 qpair failed and we were unable to recover it. 00:31:13.669 [2024-05-13 03:12:04.108256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.108468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.108493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.669 qpair failed and we were unable to recover it. 00:31:13.669 [2024-05-13 03:12:04.108678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.108895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.108927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.669 qpair failed and we were unable to recover it. 00:31:13.669 [2024-05-13 03:12:04.109114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.109337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.669 [2024-05-13 03:12:04.109362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.669 qpair failed and we were unable to recover it. 
00:31:13.669 [2024-05-13 03:12:04.109611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.109885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.109911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.670 qpair failed and we were unable to recover it. 00:31:13.670 [2024-05-13 03:12:04.110138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.110331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.110355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.670 qpair failed and we were unable to recover it. 00:31:13.670 [2024-05-13 03:12:04.110577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.110776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.110802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.670 qpair failed and we were unable to recover it. 00:31:13.670 [2024-05-13 03:12:04.111020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.111294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.111322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.670 qpair failed and we were unable to recover it. 00:31:13.670 [2024-05-13 03:12:04.111671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.111972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.111997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.670 qpair failed and we were unable to recover it. 00:31:13.670 [2024-05-13 03:12:04.112317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.112560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.112586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.670 qpair failed and we were unable to recover it. 00:31:13.670 [2024-05-13 03:12:04.112827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.113005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.113030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.670 qpair failed and we were unable to recover it. 
00:31:13.670 [2024-05-13 03:12:04.113221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.113472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.113497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.670 qpair failed and we were unable to recover it. 00:31:13.670 [2024-05-13 03:12:04.113759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.113975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.114000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.670 qpair failed and we were unable to recover it. 00:31:13.670 [2024-05-13 03:12:04.114211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.114522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.114574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.670 qpair failed and we were unable to recover it. 00:31:13.670 [2024-05-13 03:12:04.114842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.115052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.115082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.670 qpair failed and we were unable to recover it. 00:31:13.670 [2024-05-13 03:12:04.115353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.115565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.115590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.670 qpair failed and we were unable to recover it. 00:31:13.670 [2024-05-13 03:12:04.115779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.115993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.116020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.670 qpair failed and we were unable to recover it. 00:31:13.670 [2024-05-13 03:12:04.116230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.116440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.116468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.670 qpair failed and we were unable to recover it. 
00:31:13.670 [2024-05-13 03:12:04.116680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.116878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.116903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.670 qpair failed and we were unable to recover it. 00:31:13.670 [2024-05-13 03:12:04.117154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.117385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.117413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.670 qpair failed and we were unable to recover it. 00:31:13.670 [2024-05-13 03:12:04.117665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.117911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.117937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.670 qpair failed and we were unable to recover it. 00:31:13.670 [2024-05-13 03:12:04.118201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.118476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.118504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.670 qpair failed and we were unable to recover it. 00:31:13.670 [2024-05-13 03:12:04.118786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.119008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.119033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.670 qpair failed and we were unable to recover it. 00:31:13.670 [2024-05-13 03:12:04.119285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.119492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.119528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.670 qpair failed and we were unable to recover it. 00:31:13.670 [2024-05-13 03:12:04.119793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.120035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.120059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.670 qpair failed and we were unable to recover it. 
00:31:13.670 [2024-05-13 03:12:04.120309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.120525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.120552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.670 qpair failed and we were unable to recover it. 00:31:13.670 [2024-05-13 03:12:04.120795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.121021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.121049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.670 qpair failed and we were unable to recover it. 00:31:13.670 [2024-05-13 03:12:04.121293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.121520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.121546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.670 qpair failed and we were unable to recover it. 00:31:13.670 [2024-05-13 03:12:04.121804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.122041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.122068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.670 qpair failed and we were unable to recover it. 00:31:13.670 [2024-05-13 03:12:04.122309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.122552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.122576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.670 qpair failed and we were unable to recover it. 00:31:13.670 [2024-05-13 03:12:04.122834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.123076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.123101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.670 qpair failed and we were unable to recover it. 00:31:13.670 [2024-05-13 03:12:04.123310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.123551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.123579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.670 qpair failed and we were unable to recover it. 
00:31:13.670 [2024-05-13 03:12:04.123811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.124022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.124047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.670 qpair failed and we were unable to recover it. 00:31:13.670 [2024-05-13 03:12:04.124276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.124740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.670 [2024-05-13 03:12:04.124772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.670 qpair failed and we were unable to recover it. 00:31:13.671 [2024-05-13 03:12:04.125017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.125259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.125285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.671 qpair failed and we were unable to recover it. 00:31:13.671 [2024-05-13 03:12:04.125537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.125792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.125819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.671 qpair failed and we were unable to recover it. 00:31:13.671 [2024-05-13 03:12:04.126099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.126343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.126371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.671 qpair failed and we were unable to recover it. 00:31:13.671 [2024-05-13 03:12:04.126625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.126847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.126874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.671 qpair failed and we were unable to recover it. 00:31:13.671 [2024-05-13 03:12:04.127101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.127364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.127391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.671 qpair failed and we were unable to recover it. 
00:31:13.671 [2024-05-13 03:12:04.127598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.127847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.127877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.671 qpair failed and we were unable to recover it. 00:31:13.671 [2024-05-13 03:12:04.128092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.128385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.128411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.671 qpair failed and we were unable to recover it. 00:31:13.671 [2024-05-13 03:12:04.128662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.128939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.128965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.671 qpair failed and we were unable to recover it. 00:31:13.671 [2024-05-13 03:12:04.129181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.129431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.129456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.671 qpair failed and we were unable to recover it. 00:31:13.671 [2024-05-13 03:12:04.129676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.129877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.129902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.671 qpair failed and we were unable to recover it. 00:31:13.671 [2024-05-13 03:12:04.130107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.130328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.130356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.671 qpair failed and we were unable to recover it. 00:31:13.671 [2024-05-13 03:12:04.130626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.130880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.130909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.671 qpair failed and we were unable to recover it. 
00:31:13.671 [2024-05-13 03:12:04.131154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.131422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.131447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.671 qpair failed and we were unable to recover it. 00:31:13.671 [2024-05-13 03:12:04.131667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.131885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.131910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.671 qpair failed and we were unable to recover it. 00:31:13.671 [2024-05-13 03:12:04.132128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.132507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.132555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.671 qpair failed and we were unable to recover it. 00:31:13.671 [2024-05-13 03:12:04.132779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.133028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.133057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.671 qpair failed and we were unable to recover it. 00:31:13.671 [2024-05-13 03:12:04.133265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.133660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.133735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.671 qpair failed and we were unable to recover it. 00:31:13.671 [2024-05-13 03:12:04.133981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.134196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.134221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.671 qpair failed and we were unable to recover it. 00:31:13.671 [2024-05-13 03:12:04.134556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.134820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.134848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.671 qpair failed and we were unable to recover it. 
00:31:13.671 [2024-05-13 03:12:04.135065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.135308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.135335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.671 qpair failed and we were unable to recover it. 00:31:13.671 [2024-05-13 03:12:04.135748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.135984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.136009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.671 qpair failed and we were unable to recover it. 00:31:13.671 [2024-05-13 03:12:04.136193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.136437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.136465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.671 qpair failed and we were unable to recover it. 00:31:13.671 [2024-05-13 03:12:04.136682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.136907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.136935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.671 qpair failed and we were unable to recover it. 00:31:13.671 [2024-05-13 03:12:04.137162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.137430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.137455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.671 qpair failed and we were unable to recover it. 00:31:13.671 [2024-05-13 03:12:04.137648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.137929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.137957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.671 qpair failed and we were unable to recover it. 00:31:13.671 [2024-05-13 03:12:04.138208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.138477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.138501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.671 qpair failed and we were unable to recover it. 
00:31:13.671 [2024-05-13 03:12:04.138752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.138964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.138991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.671 qpair failed and we were unable to recover it. 00:31:13.671 [2024-05-13 03:12:04.139236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.139476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.139501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.671 qpair failed and we were unable to recover it. 00:31:13.671 [2024-05-13 03:12:04.139727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.139982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.140009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.671 qpair failed and we were unable to recover it. 00:31:13.671 [2024-05-13 03:12:04.140230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.140468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.671 [2024-05-13 03:12:04.140497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.671 qpair failed and we were unable to recover it. 00:31:13.672 [2024-05-13 03:12:04.140742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.140979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.141007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.672 qpair failed and we were unable to recover it. 00:31:13.672 [2024-05-13 03:12:04.141292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.141608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.141640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.672 qpair failed and we were unable to recover it. 00:31:13.672 [2024-05-13 03:12:04.141890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.142129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.142157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.672 qpair failed and we were unable to recover it. 
00:31:13.672 [2024-05-13 03:12:04.142392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.142610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.142639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.672 qpair failed and we were unable to recover it. 00:31:13.672 [2024-05-13 03:12:04.142857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.143092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.143120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.672 qpair failed and we were unable to recover it. 00:31:13.672 [2024-05-13 03:12:04.143415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.143651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.143678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.672 qpair failed and we were unable to recover it. 00:31:13.672 [2024-05-13 03:12:04.143920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.144136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.144160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.672 qpair failed and we were unable to recover it. 00:31:13.672 [2024-05-13 03:12:04.144414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.144631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.144655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.672 qpair failed and we were unable to recover it. 00:31:13.672 [2024-05-13 03:12:04.144862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.145111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.145138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.672 qpair failed and we were unable to recover it. 00:31:13.672 [2024-05-13 03:12:04.145380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.145620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.145661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.672 qpair failed and we were unable to recover it. 
00:31:13.672 [2024-05-13 03:12:04.145889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.146084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.146109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.672 qpair failed and we were unable to recover it. 00:31:13.672 [2024-05-13 03:12:04.146299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.146540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.146569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.672 qpair failed and we were unable to recover it. 00:31:13.672 [2024-05-13 03:12:04.146808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.146995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.147020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.672 qpair failed and we were unable to recover it. 00:31:13.672 [2024-05-13 03:12:04.147246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.147454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.147479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.672 qpair failed and we were unable to recover it. 00:31:13.672 [2024-05-13 03:12:04.147672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.147879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.147904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.672 qpair failed and we were unable to recover it. 00:31:13.672 [2024-05-13 03:12:04.148131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.148368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.148395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.672 qpair failed and we were unable to recover it. 00:31:13.672 [2024-05-13 03:12:04.148609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.148885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.148911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.672 qpair failed and we were unable to recover it. 
00:31:13.672 [2024-05-13 03:12:04.149157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.149389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.149417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.672 qpair failed and we were unable to recover it. 00:31:13.672 [2024-05-13 03:12:04.149674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.149862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.149887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.672 qpair failed and we were unable to recover it. 00:31:13.672 [2024-05-13 03:12:04.150117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.150376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.150403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.672 qpair failed and we were unable to recover it. 00:31:13.672 [2024-05-13 03:12:04.150643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.150836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.150862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.672 qpair failed and we were unable to recover it. 00:31:13.672 [2024-05-13 03:12:04.151111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.151322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.151347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.672 qpair failed and we were unable to recover it. 00:31:13.672 [2024-05-13 03:12:04.151539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.151788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.151814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.672 qpair failed and we were unable to recover it. 00:31:13.672 [2024-05-13 03:12:04.152004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.152233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.152269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.672 qpair failed and we were unable to recover it. 
00:31:13.672 [2024-05-13 03:12:04.152537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.152768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.152793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.672 qpair failed and we were unable to recover it. 00:31:13.672 [2024-05-13 03:12:04.153010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.153275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.153303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.672 qpair failed and we were unable to recover it. 00:31:13.672 [2024-05-13 03:12:04.153535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.153786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.153815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.672 qpair failed and we were unable to recover it. 00:31:13.672 [2024-05-13 03:12:04.154039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.154232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.154256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.672 qpair failed and we were unable to recover it. 00:31:13.672 [2024-05-13 03:12:04.154469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.154693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.154725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.672 qpair failed and we were unable to recover it. 00:31:13.672 [2024-05-13 03:12:04.154965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.155226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.672 [2024-05-13 03:12:04.155251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.672 qpair failed and we were unable to recover it. 00:31:13.673 [2024-05-13 03:12:04.155480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.673 [2024-05-13 03:12:04.155723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.673 [2024-05-13 03:12:04.155749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.673 qpair failed and we were unable to recover it. 
00:31:13.673 [2024-05-13 03:12:04.155949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.673 [2024-05-13 03:12:04.156201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.673 [2024-05-13 03:12:04.156229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.673 qpair failed and we were unable to recover it. 00:31:13.673 [2024-05-13 03:12:04.156473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.673 [2024-05-13 03:12:04.156669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.673 [2024-05-13 03:12:04.156703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.673 qpair failed and we were unable to recover it. 00:31:13.673 [2024-05-13 03:12:04.156951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.673 [2024-05-13 03:12:04.157196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.673 [2024-05-13 03:12:04.157221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.673 qpair failed and we were unable to recover it. 00:31:13.673 [2024-05-13 03:12:04.157435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.673 [2024-05-13 03:12:04.157654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.673 [2024-05-13 03:12:04.157678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.673 qpair failed and we were unable to recover it. 00:31:13.673 [2024-05-13 03:12:04.157896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.673 [2024-05-13 03:12:04.158073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.673 [2024-05-13 03:12:04.158097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.673 qpair failed and we were unable to recover it. 00:31:13.673 [2024-05-13 03:12:04.158363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.673 [2024-05-13 03:12:04.158756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.673 [2024-05-13 03:12:04.158785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.673 qpair failed and we were unable to recover it. 00:31:13.673 [2024-05-13 03:12:04.159013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.673 [2024-05-13 03:12:04.159257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.673 [2024-05-13 03:12:04.159284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.673 qpair failed and we were unable to recover it. 
00:31:13.673 [2024-05-13 03:12:04.159540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.673 [2024-05-13 03:12:04.159781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.673 [2024-05-13 03:12:04.159806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.673 qpair failed and we were unable to recover it. 00:31:13.673 [2024-05-13 03:12:04.160002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.673 [2024-05-13 03:12:04.160194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.673 [2024-05-13 03:12:04.160218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.673 qpair failed and we were unable to recover it. 00:31:13.673 [2024-05-13 03:12:04.160495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.673 [2024-05-13 03:12:04.160716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.673 [2024-05-13 03:12:04.160758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.673 qpair failed and we were unable to recover it. 00:31:13.673 [2024-05-13 03:12:04.161005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.673 [2024-05-13 03:12:04.161293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.673 [2024-05-13 03:12:04.161320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.673 qpair failed and we were unable to recover it. 00:31:13.673 [2024-05-13 03:12:04.161580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.673 [2024-05-13 03:12:04.161830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.673 [2024-05-13 03:12:04.161858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.673 qpair failed and we were unable to recover it. 00:31:13.673 [2024-05-13 03:12:04.162106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.673 [2024-05-13 03:12:04.162291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.673 [2024-05-13 03:12:04.162315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.673 qpair failed and we were unable to recover it. 00:31:13.673 [2024-05-13 03:12:04.162560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.673 [2024-05-13 03:12:04.162840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.673 [2024-05-13 03:12:04.162866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.673 qpair failed and we were unable to recover it. 
00:31:13.673 [2024-05-13 03:12:04.163081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.673 [2024-05-13 03:12:04.163297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.673 [2024-05-13 03:12:04.163322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.673 qpair failed and we were unable to recover it. 00:31:13.673 [2024-05-13 03:12:04.163513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.673 [2024-05-13 03:12:04.163725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.673 [2024-05-13 03:12:04.163767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.673 qpair failed and we were unable to recover it. 00:31:13.673 [2024-05-13 03:12:04.163962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.673 [2024-05-13 03:12:04.164210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.673 [2024-05-13 03:12:04.164237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.673 qpair failed and we were unable to recover it. 00:31:13.673 [2024-05-13 03:12:04.164435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.673 [2024-05-13 03:12:04.164639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.673 [2024-05-13 03:12:04.164666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.673 qpair failed and we were unable to recover it. 00:31:13.673 [2024-05-13 03:12:04.164887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.673 [2024-05-13 03:12:04.165101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.673 [2024-05-13 03:12:04.165130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.673 qpair failed and we were unable to recover it. 00:31:13.673 [2024-05-13 03:12:04.165367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.673 [2024-05-13 03:12:04.165610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.673 [2024-05-13 03:12:04.165637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.673 qpair failed and we were unable to recover it. 00:31:13.673 [2024-05-13 03:12:04.165880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.673 [2024-05-13 03:12:04.166080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.673 [2024-05-13 03:12:04.166105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.673 qpair failed and we were unable to recover it. 
00:31:13.673 [2024-05-13 03:12:04.166323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.673 [2024-05-13 03:12:04.166575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.673 [2024-05-13 03:12:04.166600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.673 qpair failed and we were unable to recover it. 00:31:13.673 [2024-05-13 03:12:04.166832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.673 [2024-05-13 03:12:04.167052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.673 [2024-05-13 03:12:04.167077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.673 qpair failed and we were unable to recover it. 00:31:13.673 [2024-05-13 03:12:04.167279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.673 [2024-05-13 03:12:04.167467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.673 [2024-05-13 03:12:04.167492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.673 qpair failed and we were unable to recover it. 00:31:13.673 [2024-05-13 03:12:04.167722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.167954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.167979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.674 qpair failed and we were unable to recover it. 00:31:13.674 [2024-05-13 03:12:04.168195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.168436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.168463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.674 qpair failed and we were unable to recover it. 00:31:13.674 [2024-05-13 03:12:04.168683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.168936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.168964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.674 qpair failed and we were unable to recover it. 00:31:13.674 [2024-05-13 03:12:04.169201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.169476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.169504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.674 qpair failed and we were unable to recover it. 
00:31:13.674 [2024-05-13 03:12:04.169764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.169967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.169993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.674 qpair failed and we were unable to recover it. 00:31:13.674 [2024-05-13 03:12:04.170241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.170474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.170501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.674 qpair failed and we were unable to recover it. 00:31:13.674 [2024-05-13 03:12:04.170771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.170988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.171015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.674 qpair failed and we were unable to recover it. 00:31:13.674 [2024-05-13 03:12:04.171232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.171505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.171532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.674 qpair failed and we were unable to recover it. 00:31:13.674 [2024-05-13 03:12:04.171811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.172021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.172048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.674 qpair failed and we were unable to recover it. 00:31:13.674 [2024-05-13 03:12:04.172247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.172490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.172536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.674 qpair failed and we were unable to recover it. 00:31:13.674 [2024-05-13 03:12:04.172773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.172989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.173014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.674 qpair failed and we were unable to recover it. 
00:31:13.674 [2024-05-13 03:12:04.173255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.173496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.173523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.674 qpair failed and we were unable to recover it. 00:31:13.674 [2024-05-13 03:12:04.173790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.174001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.174030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.674 qpair failed and we were unable to recover it. 00:31:13.674 [2024-05-13 03:12:04.174268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.174482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.174509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.674 qpair failed and we were unable to recover it. 00:31:13.674 [2024-05-13 03:12:04.174723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.174936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.174964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.674 qpair failed and we were unable to recover it. 00:31:13.674 [2024-05-13 03:12:04.175181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.175398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.175423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.674 qpair failed and we were unable to recover it. 00:31:13.674 [2024-05-13 03:12:04.175668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.175969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.175997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.674 qpair failed and we were unable to recover it. 00:31:13.674 [2024-05-13 03:12:04.176200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.176419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.176447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.674 qpair failed and we were unable to recover it. 
00:31:13.674 [2024-05-13 03:12:04.176652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.176872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.176900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.674 qpair failed and we were unable to recover it. 00:31:13.674 [2024-05-13 03:12:04.177175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.177454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.177478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.674 qpair failed and we were unable to recover it. 00:31:13.674 [2024-05-13 03:12:04.177708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.177932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.177956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.674 qpair failed and we were unable to recover it. 00:31:13.674 [2024-05-13 03:12:04.178153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.178372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.178396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.674 qpair failed and we were unable to recover it. 00:31:13.674 [2024-05-13 03:12:04.178587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.178782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.178811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.674 qpair failed and we were unable to recover it. 00:31:13.674 [2024-05-13 03:12:04.179009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.179228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.179252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.674 qpair failed and we were unable to recover it. 00:31:13.674 [2024-05-13 03:12:04.179464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.179656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.179680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.674 qpair failed and we were unable to recover it. 
00:31:13.674 [2024-05-13 03:12:04.180424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.180668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.180702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.674 qpair failed and we were unable to recover it. 00:31:13.674 [2024-05-13 03:12:04.180910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.181165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.181190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.674 qpair failed and we were unable to recover it. 00:31:13.674 [2024-05-13 03:12:04.181390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.181575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.181600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.674 qpair failed and we were unable to recover it. 00:31:13.674 [2024-05-13 03:12:04.181852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.182073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.182098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.674 qpair failed and we were unable to recover it. 00:31:13.674 [2024-05-13 03:12:04.182341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.674 [2024-05-13 03:12:04.182579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.182608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.675 qpair failed and we were unable to recover it. 00:31:13.675 [2024-05-13 03:12:04.182868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.183139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.183167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.675 qpair failed and we were unable to recover it. 00:31:13.675 [2024-05-13 03:12:04.183444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.183675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.183716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.675 qpair failed and we were unable to recover it. 
00:31:13.675 [2024-05-13 03:12:04.183934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.184148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.184175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.675 qpair failed and we were unable to recover it. 00:31:13.675 [2024-05-13 03:12:04.184443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.184684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.184717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.675 qpair failed and we were unable to recover it. 00:31:13.675 [2024-05-13 03:12:04.184916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.185130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.185155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.675 qpair failed and we were unable to recover it. 00:31:13.675 [2024-05-13 03:12:04.185545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.185780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.185805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.675 qpair failed and we were unable to recover it. 00:31:13.675 [2024-05-13 03:12:04.186046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.186272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.186313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.675 qpair failed and we were unable to recover it. 00:31:13.675 [2024-05-13 03:12:04.186576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.186767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.186793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.675 qpair failed and we were unable to recover it. 00:31:13.675 [2024-05-13 03:12:04.186971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.187173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.187198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.675 qpair failed and we were unable to recover it. 
00:31:13.675 [2024-05-13 03:12:04.187434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.187638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.187663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.675 qpair failed and we were unable to recover it. 00:31:13.675 [2024-05-13 03:12:04.187858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.188052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.188077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.675 qpair failed and we were unable to recover it. 00:31:13.675 [2024-05-13 03:12:04.188343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.188544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.188569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.675 qpair failed and we were unable to recover it. 00:31:13.675 [2024-05-13 03:12:04.188791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.188996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.189023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.675 qpair failed and we were unable to recover it. 00:31:13.675 [2024-05-13 03:12:04.189287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.189536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.189562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.675 qpair failed and we were unable to recover it. 00:31:13.675 [2024-05-13 03:12:04.189749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.189947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.189972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.675 qpair failed and we were unable to recover it. 00:31:13.675 [2024-05-13 03:12:04.190220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.190507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.190550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.675 qpair failed and we were unable to recover it. 
00:31:13.675 [2024-05-13 03:12:04.190791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.190995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.191025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.675 qpair failed and we were unable to recover it. 00:31:13.675 [2024-05-13 03:12:04.191239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.191455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.191479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.675 qpair failed and we were unable to recover it. 00:31:13.675 [2024-05-13 03:12:04.191707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.191890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.191915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.675 qpair failed and we were unable to recover it. 00:31:13.675 [2024-05-13 03:12:04.192162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.192406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.192445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.675 qpair failed and we were unable to recover it. 00:31:13.675 [2024-05-13 03:12:04.192682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.192880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.192904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.675 qpair failed and we were unable to recover it. 00:31:13.675 [2024-05-13 03:12:04.193120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.193386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.193413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.675 qpair failed and we were unable to recover it. 00:31:13.675 [2024-05-13 03:12:04.193657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.193860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.193884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.675 qpair failed and we were unable to recover it. 
00:31:13.675 [2024-05-13 03:12:04.194119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.194366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.194394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.675 qpair failed and we were unable to recover it. 00:31:13.675 [2024-05-13 03:12:04.194622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.194815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.194840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.675 qpair failed and we were unable to recover it. 00:31:13.675 [2024-05-13 03:12:04.195032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.195245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.195272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.675 qpair failed and we were unable to recover it. 00:31:13.675 [2024-05-13 03:12:04.195555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.195793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.195818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.675 qpair failed and we were unable to recover it. 00:31:13.675 [2024-05-13 03:12:04.196027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.196296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.196324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.675 qpair failed and we were unable to recover it. 00:31:13.675 [2024-05-13 03:12:04.196590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.196835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.675 [2024-05-13 03:12:04.196860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.676 qpair failed and we were unable to recover it. 00:31:13.676 [2024-05-13 03:12:04.197050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.197291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.197321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.676 qpair failed and we were unable to recover it. 
00:31:13.676 [2024-05-13 03:12:04.197556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.197767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.197795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.676 qpair failed and we were unable to recover it. 00:31:13.676 [2024-05-13 03:12:04.198010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.198196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.198221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.676 qpair failed and we were unable to recover it. 00:31:13.676 [2024-05-13 03:12:04.198466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.198681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.198714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.676 qpair failed and we were unable to recover it. 00:31:13.676 [2024-05-13 03:12:04.198906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.199084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.199112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.676 qpair failed and we were unable to recover it. 00:31:13.676 [2024-05-13 03:12:04.199358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.199591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.199615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.676 qpair failed and we were unable to recover it. 00:31:13.676 [2024-05-13 03:12:04.199839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.200028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.200054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.676 qpair failed and we were unable to recover it. 00:31:13.676 [2024-05-13 03:12:04.200297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.200635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.200662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.676 qpair failed and we were unable to recover it. 
00:31:13.676 [2024-05-13 03:12:04.200914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.201155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.201183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.676 qpair failed and we were unable to recover it. 00:31:13.676 [2024-05-13 03:12:04.201417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.201675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.201706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.676 qpair failed and we were unable to recover it. 00:31:13.676 [2024-05-13 03:12:04.201949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.202231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.202258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.676 qpair failed and we were unable to recover it. 00:31:13.676 [2024-05-13 03:12:04.202497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.202741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.202769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.676 qpair failed and we were unable to recover it. 00:31:13.676 [2024-05-13 03:12:04.203008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.203253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.203280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.676 qpair failed and we were unable to recover it. 00:31:13.676 [2024-05-13 03:12:04.203545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.203822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.203851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.676 qpair failed and we were unable to recover it. 00:31:13.676 [2024-05-13 03:12:04.204074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.204337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.204369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.676 qpair failed and we were unable to recover it. 
00:31:13.676 [2024-05-13 03:12:04.204607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.204844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.204872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.676 qpair failed and we were unable to recover it. 00:31:13.676 [2024-05-13 03:12:04.205092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.205341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.205387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.676 qpair failed and we were unable to recover it. 00:31:13.676 [2024-05-13 03:12:04.205627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.205878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.205906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.676 qpair failed and we were unable to recover it. 00:31:13.676 [2024-05-13 03:12:04.206149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.206410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.206437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.676 qpair failed and we were unable to recover it. 00:31:13.676 [2024-05-13 03:12:04.206647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.206890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.206919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.676 qpair failed and we were unable to recover it. 00:31:13.676 [2024-05-13 03:12:04.207162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.207460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.207510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.676 qpair failed and we were unable to recover it. 00:31:13.676 [2024-05-13 03:12:04.207768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.207989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.208017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.676 qpair failed and we were unable to recover it. 
00:31:13.676 [2024-05-13 03:12:04.208250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.208503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.208546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.676 qpair failed and we were unable to recover it. 00:31:13.676 [2024-05-13 03:12:04.208817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.209060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.209088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.676 qpair failed and we were unable to recover it. 00:31:13.676 [2024-05-13 03:12:04.209355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.209595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.209640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.676 qpair failed and we were unable to recover it. 00:31:13.676 [2024-05-13 03:12:04.209921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.210141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.210165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.676 qpair failed and we were unable to recover it. 00:31:13.676 [2024-05-13 03:12:04.210404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.210611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.210638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.676 qpair failed and we were unable to recover it. 00:31:13.676 [2024-05-13 03:12:04.210882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.211172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.211204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.676 qpair failed and we were unable to recover it. 00:31:13.676 [2024-05-13 03:12:04.211432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.211677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.676 [2024-05-13 03:12:04.211713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.676 qpair failed and we were unable to recover it. 
00:31:13.676 [2024-05-13 03:12:04.211925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.212192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.212217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.677 qpair failed and we were unable to recover it. 00:31:13.677 [2024-05-13 03:12:04.212489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.212752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.212791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.677 qpair failed and we were unable to recover it. 00:31:13.677 [2024-05-13 03:12:04.213048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.213279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.213307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.677 qpair failed and we were unable to recover it. 00:31:13.677 [2024-05-13 03:12:04.213543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.213809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.213837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.677 qpair failed and we were unable to recover it. 00:31:13.677 [2024-05-13 03:12:04.214042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.214282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.214307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.677 qpair failed and we were unable to recover it. 00:31:13.677 [2024-05-13 03:12:04.214571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.214837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.214862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.677 qpair failed and we were unable to recover it. 00:31:13.677 [2024-05-13 03:12:04.215086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.215353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.215398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.677 qpair failed and we were unable to recover it. 
00:31:13.677 [2024-05-13 03:12:04.215631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.215873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.215899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.677 qpair failed and we were unable to recover it. 00:31:13.677 [2024-05-13 03:12:04.216139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.216364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.216391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.677 qpair failed and we were unable to recover it. 00:31:13.677 [2024-05-13 03:12:04.216603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.216841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.216869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.677 qpair failed and we were unable to recover it. 00:31:13.677 [2024-05-13 03:12:04.217137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.217360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.217405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.677 qpair failed and we were unable to recover it. 00:31:13.677 [2024-05-13 03:12:04.217662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.217944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.217971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.677 qpair failed and we were unable to recover it. 00:31:13.677 [2024-05-13 03:12:04.218211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.218423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.218451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.677 qpair failed and we were unable to recover it. 00:31:13.677 [2024-05-13 03:12:04.219246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.219544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.219592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.677 qpair failed and we were unable to recover it. 
00:31:13.677 [2024-05-13 03:12:04.219830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.220068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.220096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.677 qpair failed and we were unable to recover it. 00:31:13.677 [2024-05-13 03:12:04.220303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.220532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.220557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.677 qpair failed and we were unable to recover it. 00:31:13.677 [2024-05-13 03:12:04.220747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.221012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.221037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.677 qpair failed and we were unable to recover it. 00:31:13.677 [2024-05-13 03:12:04.221225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.221407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.221432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.677 qpair failed and we were unable to recover it. 00:31:13.677 [2024-05-13 03:12:04.221665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.221925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.221953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.677 qpair failed and we were unable to recover it. 00:31:13.677 [2024-05-13 03:12:04.222193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.222999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.223031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.677 qpair failed and we were unable to recover it. 00:31:13.677 [2024-05-13 03:12:04.223297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.223618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.223643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.677 qpair failed and we were unable to recover it. 
00:31:13.677 [2024-05-13 03:12:04.223869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.224086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.224111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.677 qpair failed and we were unable to recover it. 00:31:13.677 [2024-05-13 03:12:04.224392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.224609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.224635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.677 qpair failed and we were unable to recover it. 00:31:13.677 [2024-05-13 03:12:04.224911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.225170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.225195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.677 qpair failed and we were unable to recover it. 00:31:13.677 [2024-05-13 03:12:04.225466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.225715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.225744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.677 qpair failed and we were unable to recover it. 00:31:13.677 [2024-05-13 03:12:04.225990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.677 [2024-05-13 03:12:04.226235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.226279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.678 qpair failed and we were unable to recover it. 00:31:13.678 [2024-05-13 03:12:04.226532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.226754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.226785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.678 qpair failed and we were unable to recover it. 00:31:13.678 [2024-05-13 03:12:04.227022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.227326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.227353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.678 qpair failed and we were unable to recover it. 
00:31:13.678 [2024-05-13 03:12:04.227554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.227806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.227831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.678 qpair failed and we were unable to recover it. 00:31:13.678 [2024-05-13 03:12:04.228042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.228280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.228308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.678 qpair failed and we were unable to recover it. 00:31:13.678 [2024-05-13 03:12:04.228537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.228775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.228803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.678 qpair failed and we were unable to recover it. 00:31:13.678 [2024-05-13 03:12:04.229046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.229240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.229265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.678 qpair failed and we were unable to recover it. 00:31:13.678 [2024-05-13 03:12:04.229478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.230206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.230239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.678 qpair failed and we were unable to recover it. 00:31:13.678 [2024-05-13 03:12:04.230491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.230731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.230760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.678 qpair failed and we were unable to recover it. 00:31:13.678 [2024-05-13 03:12:04.230982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.231248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.231314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.678 qpair failed and we were unable to recover it. 
00:31:13.678 [2024-05-13 03:12:04.231586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.231843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.231872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.678 qpair failed and we were unable to recover it. 00:31:13.678 [2024-05-13 03:12:04.232117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.232359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.232387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.678 qpair failed and we were unable to recover it. 00:31:13.678 [2024-05-13 03:12:04.232659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.232911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.232936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.678 qpair failed and we were unable to recover it. 00:31:13.678 [2024-05-13 03:12:04.233202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.233462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.233487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.678 qpair failed and we were unable to recover it. 00:31:13.678 [2024-05-13 03:12:04.233708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.233968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.233996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.678 qpair failed and we were unable to recover it. 00:31:13.678 [2024-05-13 03:12:04.234238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.234616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.234664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.678 qpair failed and we were unable to recover it. 00:31:13.678 [2024-05-13 03:12:04.234924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.235168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.235196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.678 qpair failed and we were unable to recover it. 
00:31:13.678 [2024-05-13 03:12:04.235460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.235719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.235748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.678 qpair failed and we were unable to recover it. 00:31:13.678 [2024-05-13 03:12:04.235981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.236243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.236271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.678 qpair failed and we were unable to recover it. 00:31:13.678 [2024-05-13 03:12:04.236508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.236753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.236782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.678 qpair failed and we were unable to recover it. 00:31:13.678 [2024-05-13 03:12:04.237025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.237264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.237288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.678 qpair failed and we were unable to recover it. 00:31:13.678 [2024-05-13 03:12:04.237521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.237754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.237782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.678 qpair failed and we were unable to recover it. 00:31:13.678 [2024-05-13 03:12:04.238009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.238238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.238265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.678 qpair failed and we were unable to recover it. 00:31:13.678 [2024-05-13 03:12:04.238527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.238758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.238786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.678 qpair failed and we were unable to recover it. 
00:31:13.678 [2024-05-13 03:12:04.239025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.239261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.239287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.678 qpair failed and we were unable to recover it. 00:31:13.678 [2024-05-13 03:12:04.239531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.239795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.239824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.678 qpair failed and we were unable to recover it. 00:31:13.678 [2024-05-13 03:12:04.240089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.240492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.240539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.678 qpair failed and we were unable to recover it. 00:31:13.678 [2024-05-13 03:12:04.240786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.241050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.241077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.678 qpair failed and we were unable to recover it. 00:31:13.678 [2024-05-13 03:12:04.241321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.241534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.241558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.678 qpair failed and we were unable to recover it. 00:31:13.678 [2024-05-13 03:12:04.241829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.242093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.678 [2024-05-13 03:12:04.242117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.678 qpair failed and we were unable to recover it. 00:31:13.678 [2024-05-13 03:12:04.242331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.242514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.242538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.679 qpair failed and we were unable to recover it. 
00:31:13.679 [2024-05-13 03:12:04.242792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.242988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.243012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2be50 with addr=10.0.0.2, port=4420 00:31:13.679 qpair failed and we were unable to recover it. 00:31:13.679 [2024-05-13 03:12:04.243129] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39970 is same with the state(5) to be set 00:31:13.679 [2024-05-13 03:12:04.243481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.243809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.243842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.679 qpair failed and we were unable to recover it. 00:31:13.679 [2024-05-13 03:12:04.244079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.244263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.244290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.679 qpair failed and we were unable to recover it. 00:31:13.679 [2024-05-13 03:12:04.244538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.244764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.244792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.679 qpair failed and we were unable to recover it. 00:31:13.679 [2024-05-13 03:12:04.245020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.245390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.245440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.679 qpair failed and we were unable to recover it. 00:31:13.679 [2024-05-13 03:12:04.245718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.245958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.245986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.679 qpair failed and we were unable to recover it. 00:31:13.679 [2024-05-13 03:12:04.246305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.246645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.246691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.679 qpair failed and we were unable to recover it. 
00:31:13.679 [2024-05-13 03:12:04.246953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.247411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.247461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.679 qpair failed and we were unable to recover it. 00:31:13.679 [2024-05-13 03:12:04.247749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.248025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.248054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.679 qpair failed and we were unable to recover it. 00:31:13.679 [2024-05-13 03:12:04.248386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.248751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.248780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.679 qpair failed and we were unable to recover it. 00:31:13.679 [2024-05-13 03:12:04.249020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.249279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.249304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.679 qpair failed and we were unable to recover it. 00:31:13.679 [2024-05-13 03:12:04.249533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.249751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.249777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.679 qpair failed and we were unable to recover it. 00:31:13.679 [2024-05-13 03:12:04.250012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.250253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.250278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.679 qpair failed and we were unable to recover it. 00:31:13.679 [2024-05-13 03:12:04.250633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.250931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.250960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.679 qpair failed and we were unable to recover it. 
00:31:13.679 [2024-05-13 03:12:04.251237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.251504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.251533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.679 qpair failed and we were unable to recover it. 00:31:13.679 [2024-05-13 03:12:04.251817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.252025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.252050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.679 qpair failed and we were unable to recover it. 00:31:13.679 [2024-05-13 03:12:04.252262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.252491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.252516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.679 qpair failed and we were unable to recover it. 00:31:13.679 [2024-05-13 03:12:04.252875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.253122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.253150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.679 qpair failed and we were unable to recover it. 00:31:13.679 [2024-05-13 03:12:04.253469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.253731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.253758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.679 qpair failed and we were unable to recover it. 00:31:13.679 [2024-05-13 03:12:04.253983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.254239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.254278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.679 qpair failed and we were unable to recover it. 00:31:13.679 [2024-05-13 03:12:04.254518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.254742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.254769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.679 qpair failed and we were unable to recover it. 
00:31:13.679 [2024-05-13 03:12:04.255020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.255412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.255457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.679 qpair failed and we were unable to recover it. 00:31:13.679 [2024-05-13 03:12:04.255730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.255956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.255981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.679 qpair failed and we were unable to recover it. 00:31:13.679 [2024-05-13 03:12:04.256244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.256686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.256763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.679 qpair failed and we were unable to recover it. 00:31:13.679 [2024-05-13 03:12:04.257031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.257289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.257314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.679 qpair failed and we were unable to recover it. 00:31:13.679 [2024-05-13 03:12:04.257718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.257982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.258012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.679 qpair failed and we were unable to recover it. 00:31:13.679 [2024-05-13 03:12:04.258253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.258475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.258501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.679 qpair failed and we were unable to recover it. 00:31:13.679 [2024-05-13 03:12:04.258728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.258940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.679 [2024-05-13 03:12:04.258969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.680 qpair failed and we were unable to recover it. 
00:31:13.680 [2024-05-13 03:12:04.259221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.680 [2024-05-13 03:12:04.259433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.680 [2024-05-13 03:12:04.259458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.680 qpair failed and we were unable to recover it. 00:31:13.680 [2024-05-13 03:12:04.259708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.680 [2024-05-13 03:12:04.259982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.680 [2024-05-13 03:12:04.260010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.680 qpair failed and we were unable to recover it. 00:31:13.680 [2024-05-13 03:12:04.260318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.680 [2024-05-13 03:12:04.260632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.680 [2024-05-13 03:12:04.260660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.680 qpair failed and we were unable to recover it. 00:31:13.680 [2024-05-13 03:12:04.260918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.680 [2024-05-13 03:12:04.261400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.680 [2024-05-13 03:12:04.261451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.680 qpair failed and we were unable to recover it. 00:31:13.680 [2024-05-13 03:12:04.261754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.680 [2024-05-13 03:12:04.262028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.680 [2024-05-13 03:12:04.262056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.680 qpair failed and we were unable to recover it. 00:31:13.680 [2024-05-13 03:12:04.262385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.680 [2024-05-13 03:12:04.262727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.680 [2024-05-13 03:12:04.262756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.680 qpair failed and we were unable to recover it. 00:31:13.680 [2024-05-13 03:12:04.263002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.680 [2024-05-13 03:12:04.263223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.680 [2024-05-13 03:12:04.263253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.680 qpair failed and we were unable to recover it. 
00:31:13.680 [2024-05-13 03:12:04.263534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.680 [2024-05-13 03:12:04.263774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.680 [2024-05-13 03:12:04.263800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.680 qpair failed and we were unable to recover it. 00:31:13.680 [2024-05-13 03:12:04.264041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.680 [2024-05-13 03:12:04.264241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.680 [2024-05-13 03:12:04.264266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.680 qpair failed and we were unable to recover it. 00:31:13.680 [2024-05-13 03:12:04.264525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.680 [2024-05-13 03:12:04.264716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.680 [2024-05-13 03:12:04.264742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.680 qpair failed and we were unable to recover it. 00:31:13.680 [2024-05-13 03:12:04.264959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.680 [2024-05-13 03:12:04.265198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.680 [2024-05-13 03:12:04.265223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.680 qpair failed and we were unable to recover it. 00:31:13.680 [2024-05-13 03:12:04.265460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.680 [2024-05-13 03:12:04.265706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.680 [2024-05-13 03:12:04.265732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.680 qpair failed and we were unable to recover it. 00:31:13.680 [2024-05-13 03:12:04.265953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.680 [2024-05-13 03:12:04.266142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.680 [2024-05-13 03:12:04.266167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.680 qpair failed and we were unable to recover it. 00:31:13.680 [2024-05-13 03:12:04.266433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.680 [2024-05-13 03:12:04.266683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.680 [2024-05-13 03:12:04.266722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.680 qpair failed and we were unable to recover it. 
00:31:13.680 [2024-05-13 03:12:04.266972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.680 [2024-05-13 03:12:04.267209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.680 [2024-05-13 03:12:04.267239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420
00:31:13.680 qpair failed and we were unable to recover it.
[repeating connection-failure block elided: the same sequence of two posix.c:1037:posix_sock_create "connect() failed, errno = 111" entries, one nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420" entry, and "qpair failed and we were unable to recover it." recurs continuously from target timestamp 2024-05-13 03:12:04.266972 through 03:12:04.347230 (console time 00:31:13.680-00:31:13.685)]
00:31:13.685 [2024-05-13 03:12:04.346989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.685 [2024-05-13 03:12:04.347204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.685 [2024-05-13 03:12:04.347230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420
00:31:13.685 qpair failed and we were unable to recover it.
00:31:13.685 [2024-05-13 03:12:04.347470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.685 [2024-05-13 03:12:04.347710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.685 [2024-05-13 03:12:04.347745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.685 qpair failed and we were unable to recover it. 00:31:13.685 [2024-05-13 03:12:04.347972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.685 [2024-05-13 03:12:04.348166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.685 [2024-05-13 03:12:04.348191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.685 qpair failed and we were unable to recover it. 00:31:13.685 [2024-05-13 03:12:04.348403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.685 [2024-05-13 03:12:04.348593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.685 [2024-05-13 03:12:04.348620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.685 qpair failed and we were unable to recover it. 00:31:13.685 [2024-05-13 03:12:04.348821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.685 [2024-05-13 03:12:04.349040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.685 [2024-05-13 03:12:04.349065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.685 qpair failed and we were unable to recover it. 00:31:13.686 [2024-05-13 03:12:04.349322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.686 [2024-05-13 03:12:04.349540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.686 [2024-05-13 03:12:04.349565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.686 qpair failed and we were unable to recover it. 00:31:13.686 [2024-05-13 03:12:04.349786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.686 [2024-05-13 03:12:04.350005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.686 [2024-05-13 03:12:04.350031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.686 qpair failed and we were unable to recover it. 00:31:13.686 [2024-05-13 03:12:04.350288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.686 [2024-05-13 03:12:04.350529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.686 [2024-05-13 03:12:04.350558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.686 qpair failed and we were unable to recover it. 
00:31:13.686 [2024-05-13 03:12:04.350790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.686 [2024-05-13 03:12:04.351011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.686 [2024-05-13 03:12:04.351038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.686 qpair failed and we were unable to recover it. 00:31:13.686 [2024-05-13 03:12:04.351282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.686 [2024-05-13 03:12:04.351518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.686 [2024-05-13 03:12:04.351548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.686 qpair failed and we were unable to recover it. 00:31:13.686 [2024-05-13 03:12:04.351782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.686 [2024-05-13 03:12:04.351983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.686 [2024-05-13 03:12:04.352013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.686 qpair failed and we were unable to recover it. 00:31:13.686 [2024-05-13 03:12:04.352325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.686 [2024-05-13 03:12:04.352556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.686 [2024-05-13 03:12:04.352585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.686 qpair failed and we were unable to recover it. 00:31:13.686 [2024-05-13 03:12:04.352823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.686 [2024-05-13 03:12:04.353086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.686 [2024-05-13 03:12:04.353111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.686 qpair failed and we were unable to recover it. 00:31:13.686 [2024-05-13 03:12:04.353323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.686 [2024-05-13 03:12:04.353564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.686 [2024-05-13 03:12:04.353590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.686 qpair failed and we were unable to recover it. 00:31:13.686 [2024-05-13 03:12:04.353808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.686 [2024-05-13 03:12:04.354080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.686 [2024-05-13 03:12:04.354105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.686 qpair failed and we were unable to recover it. 
00:31:13.686 [2024-05-13 03:12:04.354332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.686 [2024-05-13 03:12:04.354543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.686 [2024-05-13 03:12:04.354569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.686 qpair failed and we were unable to recover it. 00:31:13.686 [2024-05-13 03:12:04.354775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.686 [2024-05-13 03:12:04.354994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.686 [2024-05-13 03:12:04.355019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.686 qpair failed and we were unable to recover it. 00:31:13.686 [2024-05-13 03:12:04.355219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.686 [2024-05-13 03:12:04.355525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.686 [2024-05-13 03:12:04.355567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.686 qpair failed and we were unable to recover it. 00:31:13.686 [2024-05-13 03:12:04.355881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.686 [2024-05-13 03:12:04.356103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.686 [2024-05-13 03:12:04.356127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.686 qpair failed and we were unable to recover it. 00:31:13.686 [2024-05-13 03:12:04.356377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.686 [2024-05-13 03:12:04.356611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.686 [2024-05-13 03:12:04.356640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.686 qpair failed and we were unable to recover it. 00:31:13.686 [2024-05-13 03:12:04.356853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.686 [2024-05-13 03:12:04.357049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.686 [2024-05-13 03:12:04.357074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.686 qpair failed and we were unable to recover it. 00:31:13.686 [2024-05-13 03:12:04.357291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.686 [2024-05-13 03:12:04.357507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.686 [2024-05-13 03:12:04.357533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.686 qpair failed and we were unable to recover it. 
00:31:13.686 [2024-05-13 03:12:04.357787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.686 [2024-05-13 03:12:04.358006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.686 [2024-05-13 03:12:04.358036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.686 qpair failed and we were unable to recover it. 00:31:13.686 [2024-05-13 03:12:04.358254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.686 [2024-05-13 03:12:04.358475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.686 [2024-05-13 03:12:04.358500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.686 qpair failed and we were unable to recover it. 00:31:13.686 [2024-05-13 03:12:04.358770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.686 [2024-05-13 03:12:04.359013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.686 [2024-05-13 03:12:04.359042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.686 qpair failed and we were unable to recover it. 00:31:13.686 [2024-05-13 03:12:04.359275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.686 [2024-05-13 03:12:04.359575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.686 [2024-05-13 03:12:04.359631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.686 qpair failed and we were unable to recover it. 00:31:13.686 [2024-05-13 03:12:04.359878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.686 [2024-05-13 03:12:04.360071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.686 [2024-05-13 03:12:04.360096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.686 qpair failed and we were unable to recover it. 00:31:13.686 [2024-05-13 03:12:04.360337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.686 [2024-05-13 03:12:04.360582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.686 [2024-05-13 03:12:04.360607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.686 qpair failed and we were unable to recover it. 00:31:13.686 [2024-05-13 03:12:04.360842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.686 [2024-05-13 03:12:04.361087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.686 [2024-05-13 03:12:04.361112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.686 qpair failed and we were unable to recover it. 
00:31:13.686 [2024-05-13 03:12:04.361302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.686 [2024-05-13 03:12:04.361505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.686 [2024-05-13 03:12:04.361531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.686 qpair failed and we were unable to recover it. 00:31:13.686 [2024-05-13 03:12:04.361759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.361950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.361976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.687 qpair failed and we were unable to recover it. 00:31:13.687 [2024-05-13 03:12:04.362191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.362415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.362441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.687 qpair failed and we were unable to recover it. 00:31:13.687 [2024-05-13 03:12:04.362651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.362843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.362870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.687 qpair failed and we were unable to recover it. 00:31:13.687 [2024-05-13 03:12:04.363069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.363289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.363314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.687 qpair failed and we were unable to recover it. 00:31:13.687 [2024-05-13 03:12:04.363503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.363705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.363735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.687 qpair failed and we were unable to recover it. 00:31:13.687 [2024-05-13 03:12:04.363956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.364234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.364259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.687 qpair failed and we were unable to recover it. 
00:31:13.687 [2024-05-13 03:12:04.364515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.364774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.364803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.687 qpair failed and we were unable to recover it. 00:31:13.687 [2024-05-13 03:12:04.365043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.365290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.365330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.687 qpair failed and we were unable to recover it. 00:31:13.687 [2024-05-13 03:12:04.365657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.365924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.365952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.687 qpair failed and we were unable to recover it. 00:31:13.687 [2024-05-13 03:12:04.366164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.366405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.366433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.687 qpair failed and we were unable to recover it. 00:31:13.687 [2024-05-13 03:12:04.366705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.366923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.366948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.687 qpair failed and we were unable to recover it. 00:31:13.687 [2024-05-13 03:12:04.367150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.367329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.367353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.687 qpair failed and we were unable to recover it. 00:31:13.687 [2024-05-13 03:12:04.367570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.367803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.367828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.687 qpair failed and we were unable to recover it. 
00:31:13.687 [2024-05-13 03:12:04.368052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.368496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.368548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.687 qpair failed and we were unable to recover it. 00:31:13.687 [2024-05-13 03:12:04.368761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.368971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.369012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.687 qpair failed and we were unable to recover it. 00:31:13.687 [2024-05-13 03:12:04.369327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.369564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.369593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.687 qpair failed and we were unable to recover it. 00:31:13.687 [2024-05-13 03:12:04.369816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.370084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.370112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.687 qpair failed and we were unable to recover it. 00:31:13.687 [2024-05-13 03:12:04.370383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.370671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.370700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.687 qpair failed and we were unable to recover it. 00:31:13.687 [2024-05-13 03:12:04.370962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.371204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.371229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.687 qpair failed and we were unable to recover it. 00:31:13.687 [2024-05-13 03:12:04.371471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.371723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.371749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.687 qpair failed and we were unable to recover it. 
00:31:13.687 [2024-05-13 03:12:04.371996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.372314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.372371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.687 qpair failed and we were unable to recover it. 00:31:13.687 [2024-05-13 03:12:04.372606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.372877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.372906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.687 qpair failed and we were unable to recover it. 00:31:13.687 [2024-05-13 03:12:04.373151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.373587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.373637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.687 qpair failed and we were unable to recover it. 00:31:13.687 [2024-05-13 03:12:04.373946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.374199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.374224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.687 qpair failed and we were unable to recover it. 00:31:13.687 [2024-05-13 03:12:04.374472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.374677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.374708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.687 qpair failed and we were unable to recover it. 00:31:13.687 [2024-05-13 03:12:04.374928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.375192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.375216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.687 qpair failed and we were unable to recover it. 00:31:13.687 [2024-05-13 03:12:04.375450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.375650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.375674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.687 qpair failed and we were unable to recover it. 
00:31:13.687 [2024-05-13 03:12:04.375957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.376277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.376305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.687 qpair failed and we were unable to recover it. 00:31:13.687 [2024-05-13 03:12:04.376565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.376811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.376837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.687 qpair failed and we were unable to recover it. 00:31:13.687 [2024-05-13 03:12:04.377080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.377284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.687 [2024-05-13 03:12:04.377309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.688 qpair failed and we were unable to recover it. 00:31:13.688 [2024-05-13 03:12:04.377528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.377750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.377776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.688 qpair failed and we were unable to recover it. 00:31:13.688 [2024-05-13 03:12:04.377990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.378201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.378226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.688 qpair failed and we were unable to recover it. 00:31:13.688 [2024-05-13 03:12:04.378457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.378728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.378754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.688 qpair failed and we were unable to recover it. 00:31:13.688 [2024-05-13 03:12:04.378997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.379228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.379254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.688 qpair failed and we were unable to recover it. 
00:31:13.688 [2024-05-13 03:12:04.379471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.379725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.379751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.688 qpair failed and we were unable to recover it. 00:31:13.688 [2024-05-13 03:12:04.379975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.380202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.380227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.688 qpair failed and we were unable to recover it. 00:31:13.688 [2024-05-13 03:12:04.380474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.380688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.380724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.688 qpair failed and we were unable to recover it. 00:31:13.688 [2024-05-13 03:12:04.380943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.381315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.381347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.688 qpair failed and we were unable to recover it. 00:31:13.688 [2024-05-13 03:12:04.381552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.381781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.381807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.688 qpair failed and we were unable to recover it. 00:31:13.688 [2024-05-13 03:12:04.382076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.382526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.382575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.688 qpair failed and we were unable to recover it. 00:31:13.688 [2024-05-13 03:12:04.382810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.383047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.383071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.688 qpair failed and we were unable to recover it. 
00:31:13.688 [2024-05-13 03:12:04.383319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.383537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.383565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.688 qpair failed and we were unable to recover it. 00:31:13.688 [2024-05-13 03:12:04.383829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.384066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.384090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.688 qpair failed and we were unable to recover it. 00:31:13.688 [2024-05-13 03:12:04.384292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.384532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.384557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.688 qpair failed and we were unable to recover it. 00:31:13.688 [2024-05-13 03:12:04.384777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.385029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.385054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.688 qpair failed and we were unable to recover it. 00:31:13.688 [2024-05-13 03:12:04.385300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.385541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.385570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.688 qpair failed and we were unable to recover it. 00:31:13.688 [2024-05-13 03:12:04.385776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.386023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.386061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.688 qpair failed and we were unable to recover it. 00:31:13.688 [2024-05-13 03:12:04.386243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.386426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.386455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.688 qpair failed and we were unable to recover it. 
00:31:13.688 [2024-05-13 03:12:04.386688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.386891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.386916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.688 qpair failed and we were unable to recover it. 00:31:13.688 [2024-05-13 03:12:04.387167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.387413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.387443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.688 qpair failed and we were unable to recover it. 00:31:13.688 [2024-05-13 03:12:04.387723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.387964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.387992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.688 qpair failed and we were unable to recover it. 00:31:13.688 [2024-05-13 03:12:04.388253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.388455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.388481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.688 qpair failed and we were unable to recover it. 00:31:13.688 [2024-05-13 03:12:04.388708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.388927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.388953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.688 qpair failed and we were unable to recover it. 00:31:13.688 [2024-05-13 03:12:04.389193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.389424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.389448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.688 qpair failed and we were unable to recover it. 00:31:13.688 [2024-05-13 03:12:04.389764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.390041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.390095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.688 qpair failed and we were unable to recover it. 
00:31:13.688 [2024-05-13 03:12:04.390365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.390593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.390618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.688 qpair failed and we were unable to recover it. 00:31:13.688 [2024-05-13 03:12:04.390857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.391079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.391105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.688 qpair failed and we were unable to recover it. 00:31:13.688 [2024-05-13 03:12:04.391404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.391602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.391631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.688 qpair failed and we were unable to recover it. 00:31:13.688 [2024-05-13 03:12:04.391840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.392031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.688 [2024-05-13 03:12:04.392057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.688 qpair failed and we were unable to recover it. 00:31:13.688 [2024-05-13 03:12:04.392276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.392557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.392583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.689 qpair failed and we were unable to recover it. 00:31:13.689 [2024-05-13 03:12:04.392833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.393037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.393065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.689 qpair failed and we were unable to recover it. 00:31:13.689 [2024-05-13 03:12:04.393304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.393521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.393550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.689 qpair failed and we were unable to recover it. 
00:31:13.689 [2024-05-13 03:12:04.393761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.393943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.393968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.689 qpair failed and we were unable to recover it. 00:31:13.689 [2024-05-13 03:12:04.394219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.394462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.394510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.689 qpair failed and we were unable to recover it. 00:31:13.689 [2024-05-13 03:12:04.394824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.395204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.395264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.689 qpair failed and we were unable to recover it. 00:31:13.689 [2024-05-13 03:12:04.395558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.395799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.395829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.689 qpair failed and we were unable to recover it. 00:31:13.689 [2024-05-13 03:12:04.396055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.396303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.396329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.689 qpair failed and we were unable to recover it. 00:31:13.689 [2024-05-13 03:12:04.396579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.396795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.396829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.689 qpair failed and we were unable to recover it. 00:31:13.689 [2024-05-13 03:12:04.397097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.397342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.397381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.689 qpair failed and we were unable to recover it. 
00:31:13.689 [2024-05-13 03:12:04.397597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.397807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.397837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.689 qpair failed and we were unable to recover it. 00:31:13.689 [2024-05-13 03:12:04.398088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.398308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.398333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.689 qpair failed and we were unable to recover it. 00:31:13.689 [2024-05-13 03:12:04.398659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.398934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.398960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.689 qpair failed and we were unable to recover it. 00:31:13.689 [2024-05-13 03:12:04.399186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.399532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.399579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.689 qpair failed and we were unable to recover it. 00:31:13.689 [2024-05-13 03:12:04.399828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.400201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.400253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.689 qpair failed and we were unable to recover it. 00:31:13.689 [2024-05-13 03:12:04.400550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.400797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.400824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.689 qpair failed and we were unable to recover it. 00:31:13.689 [2024-05-13 03:12:04.401074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.401279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.401308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.689 qpair failed and we were unable to recover it. 
00:31:13.689 [2024-05-13 03:12:04.401522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.401797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.401824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.689 qpair failed and we were unable to recover it. 00:31:13.689 [2024-05-13 03:12:04.402024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.402325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.402349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.689 qpair failed and we were unable to recover it. 00:31:13.689 [2024-05-13 03:12:04.402580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.402825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.402866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.689 qpair failed and we were unable to recover it. 00:31:13.689 [2024-05-13 03:12:04.403115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.403360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.403387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.689 qpair failed and we were unable to recover it. 00:31:13.689 [2024-05-13 03:12:04.403722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.403939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.403964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.689 qpair failed and we were unable to recover it. 00:31:13.689 [2024-05-13 03:12:04.404194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.404434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.404458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.689 qpair failed and we were unable to recover it. 00:31:13.689 [2024-05-13 03:12:04.404662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.404970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.405012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.689 qpair failed and we were unable to recover it. 
00:31:13.689 [2024-05-13 03:12:04.405263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.405493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.405518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.689 qpair failed and we were unable to recover it. 00:31:13.689 [2024-05-13 03:12:04.405788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.406001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.406030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.689 qpair failed and we were unable to recover it. 00:31:13.689 [2024-05-13 03:12:04.406261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.406505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.406534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.689 qpair failed and we were unable to recover it. 00:31:13.689 [2024-05-13 03:12:04.406817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.407061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.407101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.689 qpair failed and we were unable to recover it. 00:31:13.689 [2024-05-13 03:12:04.407317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.407541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.407565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.689 qpair failed and we were unable to recover it. 00:31:13.689 [2024-05-13 03:12:04.407829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.408031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.689 [2024-05-13 03:12:04.408071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.690 qpair failed and we were unable to recover it. 00:31:13.690 [2024-05-13 03:12:04.408279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.408457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.408481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.690 qpair failed and we were unable to recover it. 
00:31:13.690 [2024-05-13 03:12:04.408757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.409007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.409032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.690 qpair failed and we were unable to recover it. 00:31:13.690 [2024-05-13 03:12:04.409285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.409746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.409775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.690 qpair failed and we were unable to recover it. 00:31:13.690 [2024-05-13 03:12:04.410063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.410383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.410411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.690 qpair failed and we were unable to recover it. 00:31:13.690 [2024-05-13 03:12:04.410673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.410946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.410972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.690 qpair failed and we were unable to recover it. 00:31:13.690 [2024-05-13 03:12:04.411228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.411447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.411472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.690 qpair failed and we were unable to recover it. 00:31:13.690 [2024-05-13 03:12:04.411706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.411937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.411963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.690 qpair failed and we were unable to recover it. 00:31:13.690 [2024-05-13 03:12:04.412233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.412538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.412562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.690 qpair failed and we were unable to recover it. 
00:31:13.690 [2024-05-13 03:12:04.412833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.413218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.413270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.690 qpair failed and we were unable to recover it. 00:31:13.690 [2024-05-13 03:12:04.413601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.413884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.413913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.690 qpair failed and we were unable to recover it. 00:31:13.690 [2024-05-13 03:12:04.414183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.414603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.414652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.690 qpair failed and we were unable to recover it. 00:31:13.690 [2024-05-13 03:12:04.414900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.415212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.415270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.690 qpair failed and we were unable to recover it. 00:31:13.690 [2024-05-13 03:12:04.415723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.416022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.416050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.690 qpair failed and we were unable to recover it. 00:31:13.690 [2024-05-13 03:12:04.416290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.416523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.416547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.690 qpair failed and we were unable to recover it. 00:31:13.690 [2024-05-13 03:12:04.416816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.417079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.417107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.690 qpair failed and we were unable to recover it. 
00:31:13.690 [2024-05-13 03:12:04.417465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.417708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.417751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.690 qpair failed and we were unable to recover it. 00:31:13.690 [2024-05-13 03:12:04.417944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.418146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.418171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.690 qpair failed and we were unable to recover it. 00:31:13.690 [2024-05-13 03:12:04.418402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.418624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.418649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.690 qpair failed and we were unable to recover it. 00:31:13.690 [2024-05-13 03:12:04.418906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.419134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.419160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.690 qpair failed and we were unable to recover it. 00:31:13.690 [2024-05-13 03:12:04.419378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.419607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.419632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.690 qpair failed and we were unable to recover it. 00:31:13.690 [2024-05-13 03:12:04.419909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.420265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.420310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.690 qpair failed and we were unable to recover it. 00:31:13.690 [2024-05-13 03:12:04.420596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.420822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.420862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.690 qpair failed and we were unable to recover it. 
00:31:13.690 [2024-05-13 03:12:04.421109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.421428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.421456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.690 qpair failed and we were unable to recover it. 00:31:13.690 [2024-05-13 03:12:04.421730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.421982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.422010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.690 qpair failed and we were unable to recover it. 00:31:13.690 [2024-05-13 03:12:04.422240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.422673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.422729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.690 qpair failed and we were unable to recover it. 00:31:13.690 [2024-05-13 03:12:04.422998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.423236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.423261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.690 qpair failed and we were unable to recover it. 00:31:13.690 [2024-05-13 03:12:04.423523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.423770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.423800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.690 qpair failed and we were unable to recover it. 00:31:13.690 [2024-05-13 03:12:04.424132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.424467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.424516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.690 qpair failed and we were unable to recover it. 00:31:13.690 [2024-05-13 03:12:04.424785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.425052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.690 [2024-05-13 03:12:04.425080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.690 qpair failed and we were unable to recover it. 
00:31:13.690 [2024-05-13 03:12:04.425292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.425530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.425558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.691 qpair failed and we were unable to recover it. 00:31:13.691 [2024-05-13 03:12:04.425799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.425977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.426001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.691 qpair failed and we were unable to recover it. 00:31:13.691 [2024-05-13 03:12:04.426230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.426610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.426671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.691 qpair failed and we were unable to recover it. 00:31:13.691 [2024-05-13 03:12:04.426913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.427291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.427346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.691 qpair failed and we were unable to recover it. 00:31:13.691 [2024-05-13 03:12:04.427591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.427810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.427836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.691 qpair failed and we were unable to recover it. 00:31:13.691 [2024-05-13 03:12:04.428135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.428721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.428751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.691 qpair failed and we were unable to recover it. 00:31:13.691 [2024-05-13 03:12:04.429027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.429472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.429523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.691 qpair failed and we were unable to recover it. 
00:31:13.691 [2024-05-13 03:12:04.429791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.430015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.430045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.691 qpair failed and we were unable to recover it. 00:31:13.691 [2024-05-13 03:12:04.430283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.430502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.430530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.691 qpair failed and we were unable to recover it. 00:31:13.691 [2024-05-13 03:12:04.430777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.431027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.431051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.691 qpair failed and we were unable to recover it. 00:31:13.691 [2024-05-13 03:12:04.431275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.431510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.431534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.691 qpair failed and we were unable to recover it. 00:31:13.691 [2024-05-13 03:12:04.431780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.432045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.432070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.691 qpair failed and we were unable to recover it. 00:31:13.691 [2024-05-13 03:12:04.432377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.432679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.432726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.691 qpair failed and we were unable to recover it. 00:31:13.691 [2024-05-13 03:12:04.432966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.433157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.433184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.691 qpair failed and we were unable to recover it. 
00:31:13.691 [2024-05-13 03:12:04.433396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.433652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.433680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.691 qpair failed and we were unable to recover it. 00:31:13.691 [2024-05-13 03:12:04.433929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.434132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.434157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.691 qpair failed and we were unable to recover it. 00:31:13.691 [2024-05-13 03:12:04.434383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.434578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.434603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.691 qpair failed and we were unable to recover it. 00:31:13.691 [2024-05-13 03:12:04.434832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.435049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.435073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.691 qpair failed and we were unable to recover it. 00:31:13.691 [2024-05-13 03:12:04.435341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.435589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.435614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.691 qpair failed and we were unable to recover it. 00:31:13.691 [2024-05-13 03:12:04.435801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.436019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.436044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.691 qpair failed and we were unable to recover it. 00:31:13.691 [2024-05-13 03:12:04.436267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.436513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.436540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.691 qpair failed and we were unable to recover it. 
00:31:13.691 [2024-05-13 03:12:04.436725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.436945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.436971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.691 qpair failed and we were unable to recover it. 00:31:13.691 [2024-05-13 03:12:04.437186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.437428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.437453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.691 qpair failed and we were unable to recover it. 00:31:13.691 [2024-05-13 03:12:04.437650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.437895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.437921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.691 qpair failed and we were unable to recover it. 00:31:13.691 [2024-05-13 03:12:04.438109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.438337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.438362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.691 qpair failed and we were unable to recover it. 00:31:13.691 [2024-05-13 03:12:04.438653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.691 [2024-05-13 03:12:04.438880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.692 [2024-05-13 03:12:04.438906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.692 qpair failed and we were unable to recover it. 00:31:13.692 [2024-05-13 03:12:04.439122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.692 [2024-05-13 03:12:04.439328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.692 [2024-05-13 03:12:04.439354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.692 qpair failed and we were unable to recover it. 00:31:13.692 [2024-05-13 03:12:04.439623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.692 [2024-05-13 03:12:04.439813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.692 [2024-05-13 03:12:04.439840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.692 qpair failed and we were unable to recover it. 
00:31:13.692 [2024-05-13 03:12:04.440030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.692 [2024-05-13 03:12:04.440215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.692 [2024-05-13 03:12:04.440254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.692 qpair failed and we were unable to recover it. 00:31:13.692 [2024-05-13 03:12:04.440492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.692 [2024-05-13 03:12:04.440750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.692 [2024-05-13 03:12:04.440778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.692 qpair failed and we were unable to recover it. 00:31:13.692 [2024-05-13 03:12:04.441031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.692 [2024-05-13 03:12:04.441285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.692 [2024-05-13 03:12:04.441333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.692 qpair failed and we were unable to recover it. 00:31:13.955 [2024-05-13 03:12:04.441647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.441852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.441878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.955 qpair failed and we were unable to recover it. 00:31:13.955 [2024-05-13 03:12:04.442063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.442306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.442332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.955 qpair failed and we were unable to recover it. 00:31:13.955 [2024-05-13 03:12:04.442523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.442714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.442741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.955 qpair failed and we were unable to recover it. 00:31:13.955 [2024-05-13 03:12:04.442946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.443182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.443207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.955 qpair failed and we were unable to recover it. 
00:31:13.955 [2024-05-13 03:12:04.443564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.443892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.443918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.955 qpair failed and we were unable to recover it. 00:31:13.955 [2024-05-13 03:12:04.444155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.444386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.444415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.955 qpair failed and we were unable to recover it. 00:31:13.955 [2024-05-13 03:12:04.444652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.444876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.444902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.955 qpair failed and we were unable to recover it. 00:31:13.955 [2024-05-13 03:12:04.445089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.445322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.445347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.955 qpair failed and we were unable to recover it. 00:31:13.955 [2024-05-13 03:12:04.445578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.445770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.445797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.955 qpair failed and we were unable to recover it. 00:31:13.955 [2024-05-13 03:12:04.446009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.446260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.446286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.955 qpair failed and we were unable to recover it. 00:31:13.955 [2024-05-13 03:12:04.446730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.447017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.447045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.955 qpair failed and we were unable to recover it. 
00:31:13.955 [2024-05-13 03:12:04.447275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.447758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.447784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.955 qpair failed and we were unable to recover it. 00:31:13.955 [2024-05-13 03:12:04.448006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.448200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.448225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.955 qpair failed and we were unable to recover it. 00:31:13.955 [2024-05-13 03:12:04.448443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.448634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.448659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.955 qpair failed and we were unable to recover it. 00:31:13.955 [2024-05-13 03:12:04.448861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.449056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.449082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.955 qpair failed and we were unable to recover it. 00:31:13.955 [2024-05-13 03:12:04.449322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.449507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.449532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.955 qpair failed and we were unable to recover it. 00:31:13.955 [2024-05-13 03:12:04.449726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.449972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.449997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.955 qpair failed and we were unable to recover it. 00:31:13.955 [2024-05-13 03:12:04.450188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.450429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.450455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.955 qpair failed and we were unable to recover it. 
00:31:13.955 [2024-05-13 03:12:04.450706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.450899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.450924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.955 qpair failed and we were unable to recover it. 00:31:13.955 [2024-05-13 03:12:04.451199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.451457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.451482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.955 qpair failed and we were unable to recover it. 00:31:13.955 [2024-05-13 03:12:04.451724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.451942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.451967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.955 qpair failed and we were unable to recover it. 00:31:13.955 [2024-05-13 03:12:04.452152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.452369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.452394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.955 qpair failed and we were unable to recover it. 00:31:13.955 [2024-05-13 03:12:04.452614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.452847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.452873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.955 qpair failed and we were unable to recover it. 00:31:13.955 [2024-05-13 03:12:04.453107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.453345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.453370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.955 qpair failed and we were unable to recover it. 00:31:13.955 [2024-05-13 03:12:04.453644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.453891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.453917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.955 qpair failed and we were unable to recover it. 
00:31:13.955 [2024-05-13 03:12:04.454167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.454368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.454396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.955 qpair failed and we were unable to recover it. 00:31:13.955 [2024-05-13 03:12:04.454596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.454850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.454879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.955 qpair failed and we were unable to recover it. 00:31:13.955 [2024-05-13 03:12:04.455143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.455581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.955 [2024-05-13 03:12:04.455636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.955 qpair failed and we were unable to recover it. 00:31:13.956 [2024-05-13 03:12:04.455885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.456100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.456129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.956 qpair failed and we were unable to recover it. 00:31:13.956 [2024-05-13 03:12:04.456371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.456578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.456606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.956 qpair failed and we were unable to recover it. 00:31:13.956 [2024-05-13 03:12:04.456838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.457039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.457066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.956 qpair failed and we were unable to recover it. 00:31:13.956 [2024-05-13 03:12:04.457280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.457502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.457527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.956 qpair failed and we were unable to recover it. 
00:31:13.956 [2024-05-13 03:12:04.457746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.458019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.458045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.956 qpair failed and we were unable to recover it. 00:31:13.956 [2024-05-13 03:12:04.458260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.458502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.458527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.956 qpair failed and we were unable to recover it. 00:31:13.956 [2024-05-13 03:12:04.458781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.458978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.459003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.956 qpair failed and we were unable to recover it. 00:31:13.956 [2024-05-13 03:12:04.459189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.459405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.459429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.956 qpair failed and we were unable to recover it. 00:31:13.956 [2024-05-13 03:12:04.459716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.459945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.459972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.956 qpair failed and we were unable to recover it. 00:31:13.956 [2024-05-13 03:12:04.460238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.460538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.460566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.956 qpair failed and we were unable to recover it. 00:31:13.956 [2024-05-13 03:12:04.460780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.461003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.461031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.956 qpair failed and we were unable to recover it. 
00:31:13.956 [2024-05-13 03:12:04.461252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.461437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.461478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.956 qpair failed and we were unable to recover it. 00:31:13.956 [2024-05-13 03:12:04.461710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.461898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.461924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.956 qpair failed and we were unable to recover it. 00:31:13.956 [2024-05-13 03:12:04.462203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.462475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.462503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.956 qpair failed and we were unable to recover it. 00:31:13.956 [2024-05-13 03:12:04.462746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.462967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.462993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.956 qpair failed and we were unable to recover it. 00:31:13.956 [2024-05-13 03:12:04.463276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.463694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.463759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.956 qpair failed and we were unable to recover it. 00:31:13.956 [2024-05-13 03:12:04.463997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.464301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.464324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.956 qpair failed and we were unable to recover it. 00:31:13.956 [2024-05-13 03:12:04.464581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.464834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.464860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.956 qpair failed and we were unable to recover it. 
00:31:13.956 [2024-05-13 03:12:04.465100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.465316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.465341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.956 qpair failed and we were unable to recover it. 00:31:13.956 [2024-05-13 03:12:04.465702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.465957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.465987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.956 qpair failed and we were unable to recover it. 00:31:13.956 [2024-05-13 03:12:04.466252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.466463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.466489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.956 qpair failed and we were unable to recover it. 00:31:13.956 [2024-05-13 03:12:04.466740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.466956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.467000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.956 qpair failed and we were unable to recover it. 00:31:13.956 [2024-05-13 03:12:04.467285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.467539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.467564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.956 qpair failed and we were unable to recover it. 00:31:13.956 [2024-05-13 03:12:04.467825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.468022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.468063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.956 qpair failed and we were unable to recover it. 00:31:13.956 [2024-05-13 03:12:04.468304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.468504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.468528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.956 qpair failed and we were unable to recover it. 
00:31:13.956 [2024-05-13 03:12:04.468759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.468971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.469010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.956 qpair failed and we were unable to recover it. 00:31:13.956 [2024-05-13 03:12:04.469230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.469476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.469515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.956 qpair failed and we were unable to recover it. 00:31:13.956 [2024-05-13 03:12:04.469779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.470009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.470039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.956 qpair failed and we were unable to recover it. 00:31:13.956 [2024-05-13 03:12:04.470245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.470586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.470644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.956 qpair failed and we were unable to recover it. 00:31:13.956 [2024-05-13 03:12:04.470906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.956 [2024-05-13 03:12:04.471133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.471158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.957 qpair failed and we were unable to recover it. 00:31:13.957 [2024-05-13 03:12:04.471430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.471658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.471683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.957 qpair failed and we were unable to recover it. 00:31:13.957 [2024-05-13 03:12:04.471982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.472382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.472438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.957 qpair failed and we were unable to recover it. 
00:31:13.957 [2024-05-13 03:12:04.472709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.472934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.472962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.957 qpair failed and we were unable to recover it. 00:31:13.957 [2024-05-13 03:12:04.473183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.473469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.473493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.957 qpair failed and we were unable to recover it. 00:31:13.957 [2024-05-13 03:12:04.473729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.473946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.473971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.957 qpair failed and we were unable to recover it. 00:31:13.957 [2024-05-13 03:12:04.474178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.474422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.474462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.957 qpair failed and we were unable to recover it. 00:31:13.957 [2024-05-13 03:12:04.474728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.474947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.474972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.957 qpair failed and we were unable to recover it. 00:31:13.957 [2024-05-13 03:12:04.475191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.475410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.475436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.957 qpair failed and we were unable to recover it. 00:31:13.957 [2024-05-13 03:12:04.475636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.475852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.475878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.957 qpair failed and we were unable to recover it. 
00:31:13.957 [2024-05-13 03:12:04.476105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.476369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.476394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.957 qpair failed and we were unable to recover it. 00:31:13.957 [2024-05-13 03:12:04.476645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.476903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.476929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.957 qpair failed and we were unable to recover it. 00:31:13.957 [2024-05-13 03:12:04.477234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.477643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.477704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.957 qpair failed and we were unable to recover it. 00:31:13.957 [2024-05-13 03:12:04.477930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.478181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.478206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.957 qpair failed and we were unable to recover it. 00:31:13.957 [2024-05-13 03:12:04.478456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.478703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.478732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.957 qpair failed and we were unable to recover it. 00:31:13.957 [2024-05-13 03:12:04.478985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.479257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.479285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.957 qpair failed and we were unable to recover it. 00:31:13.957 [2024-05-13 03:12:04.479548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.479792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.479818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.957 qpair failed and we were unable to recover it. 
00:31:13.957 [2024-05-13 03:12:04.480066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.480268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.480292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.957 qpair failed and we were unable to recover it. 00:31:13.957 [2024-05-13 03:12:04.480563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.480817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.480847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.957 qpair failed and we were unable to recover it. 00:31:13.957 [2024-05-13 03:12:04.481112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.481381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.481427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.957 qpair failed and we were unable to recover it. 00:31:13.957 [2024-05-13 03:12:04.481673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.481895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.481924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.957 qpair failed and we were unable to recover it. 00:31:13.957 [2024-05-13 03:12:04.482131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.482387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.482413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.957 qpair failed and we were unable to recover it. 00:31:13.957 [2024-05-13 03:12:04.482656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.482922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.482958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.957 qpair failed and we were unable to recover it. 00:31:13.957 [2024-05-13 03:12:04.483177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.483421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.483461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.957 qpair failed and we were unable to recover it. 
00:31:13.957 [2024-05-13 03:12:04.483709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.484026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.484073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.957 qpair failed and we were unable to recover it. 00:31:13.957 [2024-05-13 03:12:04.484290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.484764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.484794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.957 qpair failed and we were unable to recover it. 00:31:13.957 [2024-05-13 03:12:04.485073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.485574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.485626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.957 qpair failed and we were unable to recover it. 00:31:13.957 [2024-05-13 03:12:04.485881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.486110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.486135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.957 qpair failed and we were unable to recover it. 00:31:13.957 [2024-05-13 03:12:04.486380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.486605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.486630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.957 qpair failed and we were unable to recover it. 00:31:13.957 [2024-05-13 03:12:04.486931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.487189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.957 [2024-05-13 03:12:04.487217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.957 qpair failed and we were unable to recover it. 00:31:13.958 [2024-05-13 03:12:04.487507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.487760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.487789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.958 qpair failed and we were unable to recover it. 
00:31:13.958 [2024-05-13 03:12:04.488056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.488322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.488368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.958 qpair failed and we were unable to recover it. 00:31:13.958 [2024-05-13 03:12:04.488777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.488991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.489018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.958 qpair failed and we were unable to recover it. 00:31:13.958 [2024-05-13 03:12:04.489229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.489498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.489524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.958 qpair failed and we were unable to recover it. 00:31:13.958 [2024-05-13 03:12:04.489779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.490013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.490037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.958 qpair failed and we were unable to recover it. 00:31:13.958 [2024-05-13 03:12:04.490258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.490522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.490568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.958 qpair failed and we were unable to recover it. 00:31:13.958 [2024-05-13 03:12:04.490810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.491030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.491057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.958 qpair failed and we were unable to recover it. 00:31:13.958 [2024-05-13 03:12:04.491371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.491616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.491645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.958 qpair failed and we were unable to recover it. 
00:31:13.958 [2024-05-13 03:12:04.491895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.492156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.492181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.958 qpair failed and we were unable to recover it. 00:31:13.958 [2024-05-13 03:12:04.492415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.492627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.492653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.958 qpair failed and we were unable to recover it. 00:31:13.958 [2024-05-13 03:12:04.492957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.493388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.493437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.958 qpair failed and we were unable to recover it. 00:31:13.958 [2024-05-13 03:12:04.493710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.494052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.494090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.958 qpair failed and we were unable to recover it. 00:31:13.958 [2024-05-13 03:12:04.494387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.494727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.494754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.958 qpair failed and we were unable to recover it. 00:31:13.958 [2024-05-13 03:12:04.494956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.495175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.495201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.958 qpair failed and we were unable to recover it. 00:31:13.958 [2024-05-13 03:12:04.495384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.495583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.495624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.958 qpair failed and we were unable to recover it. 
00:31:13.958 [2024-05-13 03:12:04.495844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.496040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.496067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.958 qpair failed and we were unable to recover it. 00:31:13.958 [2024-05-13 03:12:04.496299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.496541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.496566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.958 qpair failed and we were unable to recover it. 00:31:13.958 [2024-05-13 03:12:04.496788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.497020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.497048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.958 qpair failed and we were unable to recover it. 00:31:13.958 [2024-05-13 03:12:04.497312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.497551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.497579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.958 qpair failed and we were unable to recover it. 00:31:13.958 [2024-05-13 03:12:04.497813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.498057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.498087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.958 qpair failed and we were unable to recover it. 00:31:13.958 [2024-05-13 03:12:04.498357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.498592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.498620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.958 qpair failed and we were unable to recover it. 00:31:13.958 [2024-05-13 03:12:04.498883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.499096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.499122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.958 qpair failed and we were unable to recover it. 
00:31:13.958 [2024-05-13 03:12:04.499307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.499549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.499574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.958 qpair failed and we were unable to recover it. 00:31:13.958 [2024-05-13 03:12:04.499855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.500055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.500083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.958 qpair failed and we were unable to recover it. 00:31:13.958 [2024-05-13 03:12:04.500306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.500526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.500551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.958 qpair failed and we were unable to recover it. 00:31:13.958 [2024-05-13 03:12:04.500778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.500978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.501021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.958 qpair failed and we were unable to recover it. 00:31:13.958 [2024-05-13 03:12:04.501240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.501654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.501711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.958 qpair failed and we were unable to recover it. 00:31:13.958 [2024-05-13 03:12:04.501952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.502186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.502212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.958 qpair failed and we were unable to recover it. 00:31:13.958 [2024-05-13 03:12:04.502480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.502746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.502788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.958 qpair failed and we were unable to recover it. 
00:31:13.958 [2024-05-13 03:12:04.503037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.958 [2024-05-13 03:12:04.503346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.959 [2024-05-13 03:12:04.503392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.959 qpair failed and we were unable to recover it. 00:31:13.959 [2024-05-13 03:12:04.503631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.959 [2024-05-13 03:12:04.503882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.959 [2024-05-13 03:12:04.503907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.959 qpair failed and we were unable to recover it. 00:31:13.959 [2024-05-13 03:12:04.504178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.959 [2024-05-13 03:12:04.504595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.959 [2024-05-13 03:12:04.504642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.959 qpair failed and we were unable to recover it. 00:31:13.959 [2024-05-13 03:12:04.504904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.959 [2024-05-13 03:12:04.505108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.959 [2024-05-13 03:12:04.505132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.959 qpair failed and we were unable to recover it. 00:31:13.959 [2024-05-13 03:12:04.505425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.959 [2024-05-13 03:12:04.505637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.959 [2024-05-13 03:12:04.505662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.959 qpair failed and we were unable to recover it. 00:31:13.959 [2024-05-13 03:12:04.505923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.959 [2024-05-13 03:12:04.506127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.959 [2024-05-13 03:12:04.506155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.959 qpair failed and we were unable to recover it. 00:31:13.959 [2024-05-13 03:12:04.506390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.959 [2024-05-13 03:12:04.506635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.959 [2024-05-13 03:12:04.506663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.959 qpair failed and we were unable to recover it. 
00:31:13.959 [2024-05-13 03:12:04.506912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.959 [2024-05-13 03:12:04.507221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.959 [2024-05-13 03:12:04.507247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.959 qpair failed and we were unable to recover it. 00:31:13.959 [2024-05-13 03:12:04.507458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.959 [2024-05-13 03:12:04.507702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.959 [2024-05-13 03:12:04.507744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.959 qpair failed and we were unable to recover it. 00:31:13.959 [2024-05-13 03:12:04.507975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.959 [2024-05-13 03:12:04.508296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.959 [2024-05-13 03:12:04.508350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.959 qpair failed and we were unable to recover it. 00:31:13.959 [2024-05-13 03:12:04.508583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.959 [2024-05-13 03:12:04.508807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.959 [2024-05-13 03:12:04.508833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.959 qpair failed and we were unable to recover it. 00:31:13.959 [2024-05-13 03:12:04.509055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.959 [2024-05-13 03:12:04.509254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.959 [2024-05-13 03:12:04.509279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.959 qpair failed and we were unable to recover it. 00:31:13.959 [2024-05-13 03:12:04.509535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.959 [2024-05-13 03:12:04.509778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.959 [2024-05-13 03:12:04.509804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.959 qpair failed and we were unable to recover it. 00:31:13.959 [2024-05-13 03:12:04.510019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.959 [2024-05-13 03:12:04.510231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.959 [2024-05-13 03:12:04.510255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.959 qpair failed and we were unable to recover it. 
00:31:13.959 [2024-05-13 03:12:04.510513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.959 [2024-05-13 03:12:04.510733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.959 [2024-05-13 03:12:04.510759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.959 qpair failed and we were unable to recover it. 00:31:13.959 [2024-05-13 03:12:04.510995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.959 [2024-05-13 03:12:04.511319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.959 [2024-05-13 03:12:04.511347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.959 qpair failed and we were unable to recover it. 00:31:13.959 [2024-05-13 03:12:04.511690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.959 [2024-05-13 03:12:04.511915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.959 [2024-05-13 03:12:04.511939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.959 qpair failed and we were unable to recover it. 00:31:13.959 [2024-05-13 03:12:04.512197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.959 [2024-05-13 03:12:04.512557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.959 [2024-05-13 03:12:04.512610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.959 qpair failed and we were unable to recover it. 00:31:13.959 [2024-05-13 03:12:04.512872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.959 [2024-05-13 03:12:04.513102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.959 [2024-05-13 03:12:04.513127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.959 qpair failed and we were unable to recover it. 00:31:13.959 [2024-05-13 03:12:04.513352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.959 [2024-05-13 03:12:04.513589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.959 [2024-05-13 03:12:04.513615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.959 qpair failed and we were unable to recover it. 00:31:13.959 [2024-05-13 03:12:04.513944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.959 [2024-05-13 03:12:04.514224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.959 [2024-05-13 03:12:04.514273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.959 qpair failed and we were unable to recover it. 
00:31:13.959 [2024-05-13 03:12:04.514515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.959 [2024-05-13 03:12:04.514756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.959 [2024-05-13 03:12:04.514784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.959 qpair failed and we were unable to recover it. 00:31:13.959 [2024-05-13 03:12:04.515052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.959 [2024-05-13 03:12:04.515296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.959 [2024-05-13 03:12:04.515324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.959 qpair failed and we were unable to recover it. 00:31:13.959 [2024-05-13 03:12:04.515564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.959 [2024-05-13 03:12:04.515766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.959 [2024-05-13 03:12:04.515794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.959 qpair failed and we were unable to recover it. 00:31:13.959 [2024-05-13 03:12:04.516061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.959 [2024-05-13 03:12:04.516571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.959 [2024-05-13 03:12:04.516619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.959 qpair failed and we were unable to recover it. 00:31:13.959 [2024-05-13 03:12:04.516888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.959 [2024-05-13 03:12:04.517125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.517150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.960 qpair failed and we were unable to recover it. 00:31:13.960 [2024-05-13 03:12:04.517369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.517620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.517648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.960 qpair failed and we were unable to recover it. 00:31:13.960 [2024-05-13 03:12:04.517930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.518305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.518356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.960 qpair failed and we were unable to recover it. 
00:31:13.960 [2024-05-13 03:12:04.518725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.518994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.519022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.960 qpair failed and we were unable to recover it. 00:31:13.960 [2024-05-13 03:12:04.519237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.519512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.519559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.960 qpair failed and we were unable to recover it. 00:31:13.960 [2024-05-13 03:12:04.519808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.520024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.520051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.960 qpair failed and we were unable to recover it. 00:31:13.960 [2024-05-13 03:12:04.520312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.520559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.520598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.960 qpair failed and we were unable to recover it. 00:31:13.960 [2024-05-13 03:12:04.520873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.521088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.521113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.960 qpair failed and we were unable to recover it. 00:31:13.960 [2024-05-13 03:12:04.521398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.521615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.521639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.960 qpair failed and we were unable to recover it. 00:31:13.960 [2024-05-13 03:12:04.521865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.522090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.522114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.960 qpair failed and we were unable to recover it. 
00:31:13.960 [2024-05-13 03:12:04.522382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.522663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.522688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.960 qpair failed and we were unable to recover it. 00:31:13.960 [2024-05-13 03:12:04.522926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.523224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.523253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.960 qpair failed and we were unable to recover it. 00:31:13.960 [2024-05-13 03:12:04.523504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.523756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.523782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.960 qpair failed and we were unable to recover it. 00:31:13.960 [2024-05-13 03:12:04.524067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.524298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.524323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.960 qpair failed and we were unable to recover it. 00:31:13.960 [2024-05-13 03:12:04.524595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.524843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.524878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.960 qpair failed and we were unable to recover it. 00:31:13.960 [2024-05-13 03:12:04.525088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.525325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.525365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.960 qpair failed and we were unable to recover it. 00:31:13.960 [2024-05-13 03:12:04.525633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.525858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.525884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.960 qpair failed and we were unable to recover it. 
00:31:13.960 [2024-05-13 03:12:04.526081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.526380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.526404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.960 qpair failed and we were unable to recover it. 00:31:13.960 [2024-05-13 03:12:04.526709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.526963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.527003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.960 qpair failed and we were unable to recover it. 00:31:13.960 [2024-05-13 03:12:04.527327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.527648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.527712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.960 qpair failed and we were unable to recover it. 00:31:13.960 [2024-05-13 03:12:04.527951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.528225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.528254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.960 qpair failed and we were unable to recover it. 00:31:13.960 [2024-05-13 03:12:04.528577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.528842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.528872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.960 qpair failed and we were unable to recover it. 00:31:13.960 [2024-05-13 03:12:04.529114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.529440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.529468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.960 qpair failed and we were unable to recover it. 00:31:13.960 [2024-05-13 03:12:04.529736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.530225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.530274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.960 qpair failed and we were unable to recover it. 
00:31:13.960 [2024-05-13 03:12:04.530636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.530893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.530918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.960 qpair failed and we were unable to recover it. 00:31:13.960 [2024-05-13 03:12:04.531175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.531398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.531423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.960 qpair failed and we were unable to recover it. 00:31:13.960 [2024-05-13 03:12:04.531709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.531976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.532004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.960 qpair failed and we were unable to recover it. 00:31:13.960 [2024-05-13 03:12:04.532212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.532423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.532448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.960 qpair failed and we were unable to recover it. 00:31:13.960 [2024-05-13 03:12:04.532690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.532901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.532927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.960 qpair failed and we were unable to recover it. 00:31:13.960 [2024-05-13 03:12:04.533186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.533418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.960 [2024-05-13 03:12:04.533448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.960 qpair failed and we were unable to recover it. 00:31:13.961 [2024-05-13 03:12:04.533730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.534026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.534054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.961 qpair failed and we were unable to recover it. 
00:31:13.961 [2024-05-13 03:12:04.534301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.534764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.534792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.961 qpair failed and we were unable to recover it. 00:31:13.961 [2024-05-13 03:12:04.535057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.535476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.535526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.961 qpair failed and we were unable to recover it. 00:31:13.961 [2024-05-13 03:12:04.535799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.536017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.536045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.961 qpair failed and we were unable to recover it. 00:31:13.961 [2024-05-13 03:12:04.536256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.536648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.536704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.961 qpair failed and we were unable to recover it. 00:31:13.961 [2024-05-13 03:12:04.536981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.537329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.537373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.961 qpair failed and we were unable to recover it. 00:31:13.961 [2024-05-13 03:12:04.537669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.537931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.537960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.961 qpair failed and we were unable to recover it. 00:31:13.961 [2024-05-13 03:12:04.538200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.538505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.538529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.961 qpair failed and we were unable to recover it. 
00:31:13.961 [2024-05-13 03:12:04.538836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.539090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.539118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.961 qpair failed and we were unable to recover it. 00:31:13.961 [2024-05-13 03:12:04.539407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.539769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.539798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.961 qpair failed and we were unable to recover it. 00:31:13.961 [2024-05-13 03:12:04.540067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.540446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.540494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.961 qpair failed and we were unable to recover it. 00:31:13.961 [2024-05-13 03:12:04.540727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.540969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.541009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.961 qpair failed and we were unable to recover it. 00:31:13.961 [2024-05-13 03:12:04.541311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.541658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.541711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.961 qpair failed and we were unable to recover it. 00:31:13.961 [2024-05-13 03:12:04.541993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.542281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.542305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.961 qpair failed and we were unable to recover it. 00:31:13.961 [2024-05-13 03:12:04.542535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.542755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.542781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.961 qpair failed and we were unable to recover it. 
00:31:13.961 [2024-05-13 03:12:04.543025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.543263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.543288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.961 qpair failed and we were unable to recover it. 00:31:13.961 [2024-05-13 03:12:04.543504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.543728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.543755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.961 qpair failed and we were unable to recover it. 00:31:13.961 [2024-05-13 03:12:04.544001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.544268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.544296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.961 qpair failed and we were unable to recover it. 00:31:13.961 [2024-05-13 03:12:04.544527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.544728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.544753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.961 qpair failed and we were unable to recover it. 00:31:13.961 [2024-05-13 03:12:04.544961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.545276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.545316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.961 qpair failed and we were unable to recover it. 00:31:13.961 [2024-05-13 03:12:04.545531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.545772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.545801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.961 qpair failed and we were unable to recover it. 00:31:13.961 [2024-05-13 03:12:04.546067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.546308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.546333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.961 qpair failed and we were unable to recover it. 
00:31:13.961 [2024-05-13 03:12:04.546615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.546846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.546875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.961 qpair failed and we were unable to recover it. 00:31:13.961 [2024-05-13 03:12:04.547111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.547341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.547366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.961 qpair failed and we were unable to recover it. 00:31:13.961 [2024-05-13 03:12:04.547670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.547930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.547960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.961 qpair failed and we were unable to recover it. 00:31:13.961 [2024-05-13 03:12:04.548173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.548568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.548618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.961 qpair failed and we were unable to recover it. 00:31:13.961 [2024-05-13 03:12:04.548886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.549202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.549257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.961 qpair failed and we were unable to recover it. 00:31:13.961 [2024-05-13 03:12:04.549545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.549832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.549862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.961 qpair failed and we were unable to recover it. 00:31:13.961 [2024-05-13 03:12:04.550081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.550316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.550346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.961 qpair failed and we were unable to recover it. 
00:31:13.961 [2024-05-13 03:12:04.550608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.961 [2024-05-13 03:12:04.550840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.550875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.962 qpair failed and we were unable to recover it. 00:31:13.962 [2024-05-13 03:12:04.551136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.551369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.551394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.962 qpair failed and we were unable to recover it. 00:31:13.962 [2024-05-13 03:12:04.551653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.551877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.551906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.962 qpair failed and we were unable to recover it. 00:31:13.962 [2024-05-13 03:12:04.552140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.552580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.552631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.962 qpair failed and we were unable to recover it. 00:31:13.962 [2024-05-13 03:12:04.552857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.553117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.553141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.962 qpair failed and we were unable to recover it. 00:31:13.962 [2024-05-13 03:12:04.553402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.553605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.553633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.962 qpair failed and we were unable to recover it. 00:31:13.962 [2024-05-13 03:12:04.553902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.554185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.554210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.962 qpair failed and we were unable to recover it. 
00:31:13.962 [2024-05-13 03:12:04.554450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.554680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.554715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.962 qpair failed and we were unable to recover it. 00:31:13.962 [2024-05-13 03:12:04.555072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.555410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.555443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.962 qpair failed and we were unable to recover it. 00:31:13.962 [2024-05-13 03:12:04.555691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.555949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.555978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.962 qpair failed and we were unable to recover it. 00:31:13.962 [2024-05-13 03:12:04.556187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.556375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.556406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.962 qpair failed and we were unable to recover it. 00:31:13.962 [2024-05-13 03:12:04.556634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.556865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.556891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.962 qpair failed and we were unable to recover it. 00:31:13.962 [2024-05-13 03:12:04.557109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.557355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.557381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.962 qpair failed and we were unable to recover it. 00:31:13.962 [2024-05-13 03:12:04.557589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.557835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.557861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.962 qpair failed and we were unable to recover it. 
00:31:13.962 [2024-05-13 03:12:04.558077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.558291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.558316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.962 qpair failed and we were unable to recover it. 00:31:13.962 [2024-05-13 03:12:04.558538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.558781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.558822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.962 qpair failed and we were unable to recover it. 00:31:13.962 [2024-05-13 03:12:04.559066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.559271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.559296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.962 qpair failed and we were unable to recover it. 00:31:13.962 [2024-05-13 03:12:04.559599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.559854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.559880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.962 qpair failed and we were unable to recover it. 00:31:13.962 [2024-05-13 03:12:04.560127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.560386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.560431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.962 qpair failed and we were unable to recover it. 00:31:13.962 [2024-05-13 03:12:04.560694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.560942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.560967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.962 qpair failed and we were unable to recover it. 00:31:13.962 [2024-05-13 03:12:04.561186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.561410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.561439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.962 qpair failed and we were unable to recover it. 
00:31:13.962 [2024-05-13 03:12:04.561694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.561946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.561972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.962 qpair failed and we were unable to recover it. 00:31:13.962 [2024-05-13 03:12:04.562161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.562428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.562468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.962 qpair failed and we were unable to recover it. 00:31:13.962 [2024-05-13 03:12:04.562770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.563179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.563223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.962 qpair failed and we were unable to recover it. 00:31:13.962 [2024-05-13 03:12:04.563481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.563726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.563757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.962 qpair failed and we were unable to recover it. 00:31:13.962 [2024-05-13 03:12:04.563998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.564255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.564282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.962 qpair failed and we were unable to recover it. 00:31:13.962 [2024-05-13 03:12:04.564493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.564736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.564763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.962 qpair failed and we were unable to recover it. 00:31:13.962 [2024-05-13 03:12:04.565046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.565302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.565328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.962 qpair failed and we were unable to recover it. 
00:31:13.962 [2024-05-13 03:12:04.565566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.565758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.565787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.962 qpair failed and we were unable to recover it. 00:31:13.962 [2024-05-13 03:12:04.566002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.566210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.962 [2024-05-13 03:12:04.566235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.962 qpair failed and we were unable to recover it. 00:31:13.962 [2024-05-13 03:12:04.566562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.566799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.566837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.963 qpair failed and we were unable to recover it. 00:31:13.963 [2024-05-13 03:12:04.567079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.567288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.567313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.963 qpair failed and we were unable to recover it. 00:31:13.963 [2024-05-13 03:12:04.567503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.567728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.567756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.963 qpair failed and we were unable to recover it. 00:31:13.963 [2024-05-13 03:12:04.568087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.568329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.568357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.963 qpair failed and we were unable to recover it. 00:31:13.963 [2024-05-13 03:12:04.568599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.568852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.568878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.963 qpair failed and we were unable to recover it. 
00:31:13.963 [2024-05-13 03:12:04.569147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.569389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.569416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.963 qpair failed and we were unable to recover it. 00:31:13.963 [2024-05-13 03:12:04.569660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.569899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.569925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.963 qpair failed and we were unable to recover it. 00:31:13.963 [2024-05-13 03:12:04.570161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.570415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.570440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.963 qpair failed and we were unable to recover it. 00:31:13.963 [2024-05-13 03:12:04.570704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.570951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.570979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.963 qpair failed and we were unable to recover it. 00:31:13.963 [2024-05-13 03:12:04.571222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.571482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.571521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.963 qpair failed and we were unable to recover it. 00:31:13.963 [2024-05-13 03:12:04.571725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.571948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.571973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.963 qpair failed and we were unable to recover it. 00:31:13.963 [2024-05-13 03:12:04.572408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.572742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.572768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.963 qpair failed and we were unable to recover it. 
00:31:13.963 [2024-05-13 03:12:04.573043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.573375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.573403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.963 qpair failed and we were unable to recover it. 00:31:13.963 [2024-05-13 03:12:04.573682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.573914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.573942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.963 qpair failed and we were unable to recover it. 00:31:13.963 [2024-05-13 03:12:04.574186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.574382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.574408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.963 qpair failed and we were unable to recover it. 00:31:13.963 [2024-05-13 03:12:04.574636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.574862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.574889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.963 qpair failed and we were unable to recover it. 00:31:13.963 [2024-05-13 03:12:04.575110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.575338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.575364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.963 qpair failed and we were unable to recover it. 00:31:13.963 [2024-05-13 03:12:04.575618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.575851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.575877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.963 qpair failed and we were unable to recover it. 00:31:13.963 [2024-05-13 03:12:04.576122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.576397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.576446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.963 qpair failed and we were unable to recover it. 
00:31:13.963 [2024-05-13 03:12:04.576759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.577023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.577048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.963 qpair failed and we were unable to recover it. 00:31:13.963 [2024-05-13 03:12:04.577265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.577497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.577522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.963 qpair failed and we were unable to recover it. 00:31:13.963 [2024-05-13 03:12:04.577808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.578055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.578080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.963 qpair failed and we were unable to recover it. 00:31:13.963 [2024-05-13 03:12:04.578275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.578470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.578496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.963 qpair failed and we were unable to recover it. 00:31:13.963 [2024-05-13 03:12:04.578742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.578960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.578985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.963 qpair failed and we were unable to recover it. 00:31:13.963 [2024-05-13 03:12:04.579171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.579378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.579403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.963 qpair failed and we were unable to recover it. 00:31:13.963 [2024-05-13 03:12:04.579637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.579851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.963 [2024-05-13 03:12:04.579877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.963 qpair failed and we were unable to recover it. 
00:31:13.964 [2024-05-13 03:12:04.580102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.580357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.580382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.964 qpair failed and we were unable to recover it. 00:31:13.964 [2024-05-13 03:12:04.580600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.580840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.580869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.964 qpair failed and we were unable to recover it. 00:31:13.964 [2024-05-13 03:12:04.581082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.581301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.581326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.964 qpair failed and we were unable to recover it. 00:31:13.964 [2024-05-13 03:12:04.581684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.581975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.582000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.964 qpair failed and we were unable to recover it. 00:31:13.964 [2024-05-13 03:12:04.582243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.582490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.582515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.964 qpair failed and we were unable to recover it. 00:31:13.964 [2024-05-13 03:12:04.582722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.582942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.582969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.964 qpair failed and we were unable to recover it. 00:31:13.964 [2024-05-13 03:12:04.583216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.583441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.583467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.964 qpair failed and we were unable to recover it. 
00:31:13.964 [2024-05-13 03:12:04.583757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.583990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.584015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.964 qpair failed and we were unable to recover it. 00:31:13.964 [2024-05-13 03:12:04.584224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.584449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.584475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.964 qpair failed and we were unable to recover it. 00:31:13.964 [2024-05-13 03:12:04.584723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.584926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.584955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.964 qpair failed and we were unable to recover it. 00:31:13.964 [2024-05-13 03:12:04.585218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.585486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.585533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.964 qpair failed and we were unable to recover it. 00:31:13.964 [2024-05-13 03:12:04.585774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.585992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.586018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.964 qpair failed and we were unable to recover it. 00:31:13.964 [2024-05-13 03:12:04.586228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.586408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.586435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.964 qpair failed and we were unable to recover it. 00:31:13.964 [2024-05-13 03:12:04.586617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.586859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.586885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.964 qpair failed and we were unable to recover it. 
00:31:13.964 [2024-05-13 03:12:04.587104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.587304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.587344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.964 qpair failed and we were unable to recover it. 00:31:13.964 [2024-05-13 03:12:04.587623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.587874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.587902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.964 qpair failed and we were unable to recover it. 00:31:13.964 [2024-05-13 03:12:04.588145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.588366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.588391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.964 qpair failed and we were unable to recover it. 00:31:13.964 [2024-05-13 03:12:04.588634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.588869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.588894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.964 qpair failed and we were unable to recover it. 00:31:13.964 [2024-05-13 03:12:04.589148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.589548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.589598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.964 qpair failed and we were unable to recover it. 00:31:13.964 [2024-05-13 03:12:04.589846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.590108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.590136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.964 qpair failed and we were unable to recover it. 00:31:13.964 [2024-05-13 03:12:04.590396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.590618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.590643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.964 qpair failed and we were unable to recover it. 
00:31:13.964 [2024-05-13 03:12:04.590832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.591051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.591077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.964 qpair failed and we were unable to recover it. 00:31:13.964 [2024-05-13 03:12:04.591352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.591545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.591570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.964 qpair failed and we were unable to recover it. 00:31:13.964 [2024-05-13 03:12:04.591785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.592031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.592056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.964 qpair failed and we were unable to recover it. 00:31:13.964 [2024-05-13 03:12:04.592335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.592575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.592603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.964 qpair failed and we were unable to recover it. 00:31:13.964 [2024-05-13 03:12:04.592848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.593116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.593142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.964 qpair failed and we were unable to recover it. 00:31:13.964 [2024-05-13 03:12:04.593396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.593640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.593668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.964 qpair failed and we were unable to recover it. 00:31:13.964 [2024-05-13 03:12:04.593941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.594242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.594289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.964 qpair failed and we were unable to recover it. 
00:31:13.964 [2024-05-13 03:12:04.594534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.594800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.594829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.964 qpair failed and we were unable to recover it. 00:31:13.964 [2024-05-13 03:12:04.595063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.964 [2024-05-13 03:12:04.595285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.595310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.965 qpair failed and we were unable to recover it. 00:31:13.965 [2024-05-13 03:12:04.595533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.595774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.595800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.965 qpair failed and we were unable to recover it. 00:31:13.965 [2024-05-13 03:12:04.596042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.596231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.596257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.965 qpair failed and we were unable to recover it. 00:31:13.965 [2024-05-13 03:12:04.596469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.596654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.596679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.965 qpair failed and we were unable to recover it. 00:31:13.965 [2024-05-13 03:12:04.596885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.597080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.597105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.965 qpair failed and we were unable to recover it. 00:31:13.965 [2024-05-13 03:12:04.597347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.597564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.597591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.965 qpair failed and we were unable to recover it. 
00:31:13.965 [2024-05-13 03:12:04.597817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.598042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.598068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.965 qpair failed and we were unable to recover it. 00:31:13.965 [2024-05-13 03:12:04.598339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.598595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.598623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.965 qpair failed and we were unable to recover it. 00:31:13.965 [2024-05-13 03:12:04.598853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.599093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.599139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.965 qpair failed and we were unable to recover it. 00:31:13.965 [2024-05-13 03:12:04.599382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.599599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.599624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.965 qpair failed and we were unable to recover it. 00:31:13.965 [2024-05-13 03:12:04.599841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.600059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.600085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.965 qpair failed and we were unable to recover it. 00:31:13.965 [2024-05-13 03:12:04.600296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.600513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.600538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.965 qpair failed and we were unable to recover it. 00:31:13.965 [2024-05-13 03:12:04.600758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.600945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.600970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.965 qpair failed and we were unable to recover it. 
00:31:13.965 [2024-05-13 03:12:04.601183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.601396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.601421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.965 qpair failed and we were unable to recover it. 00:31:13.965 [2024-05-13 03:12:04.601604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.601818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.601844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.965 qpair failed and we were unable to recover it. 00:31:13.965 [2024-05-13 03:12:04.602056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.602249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.602273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.965 qpair failed and we were unable to recover it. 00:31:13.965 [2024-05-13 03:12:04.602495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.602778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.602804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.965 qpair failed and we were unable to recover it. 00:31:13.965 [2024-05-13 03:12:04.603021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.603265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.603290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.965 qpair failed and we were unable to recover it. 00:31:13.965 [2024-05-13 03:12:04.603516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.603743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.603770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.965 qpair failed and we were unable to recover it. 00:31:13.965 [2024-05-13 03:12:04.603988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.604233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.604261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.965 qpair failed and we were unable to recover it. 
00:31:13.965 [2024-05-13 03:12:04.604503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.604761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.604791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.965 qpair failed and we were unable to recover it. 00:31:13.965 [2024-05-13 03:12:04.605054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.605267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.605293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.965 qpair failed and we were unable to recover it. 00:31:13.965 [2024-05-13 03:12:04.605511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.605728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.605755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.965 qpair failed and we were unable to recover it. 00:31:13.965 [2024-05-13 03:12:04.606006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.606260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.606288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.965 qpair failed and we were unable to recover it. 00:31:13.965 [2024-05-13 03:12:04.606493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.606714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.606740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.965 qpair failed and we were unable to recover it. 00:31:13.965 [2024-05-13 03:12:04.606962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.607180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.607205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.965 qpair failed and we were unable to recover it. 00:31:13.965 [2024-05-13 03:12:04.607428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.607626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.607652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.965 qpair failed and we were unable to recover it. 
00:31:13.965 [2024-05-13 03:12:04.607855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.608071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.608096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.965 qpair failed and we were unable to recover it. 00:31:13.965 [2024-05-13 03:12:04.608307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.608521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.608546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.965 qpair failed and we were unable to recover it. 00:31:13.965 [2024-05-13 03:12:04.608768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.608958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.965 [2024-05-13 03:12:04.608984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.965 qpair failed and we were unable to recover it. 00:31:13.965 [2024-05-13 03:12:04.609200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.609413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.609440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.966 qpair failed and we were unable to recover it. 00:31:13.966 [2024-05-13 03:12:04.609669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.609863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.609888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.966 qpair failed and we were unable to recover it. 00:31:13.966 [2024-05-13 03:12:04.610101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.610288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.610313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.966 qpair failed and we were unable to recover it. 00:31:13.966 [2024-05-13 03:12:04.610511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.610752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.610778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.966 qpair failed and we were unable to recover it. 
00:31:13.966 [2024-05-13 03:12:04.611024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.611264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.611292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.966 qpair failed and we were unable to recover it. 00:31:13.966 [2024-05-13 03:12:04.611530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.611747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.611777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.966 qpair failed and we were unable to recover it. 00:31:13.966 [2024-05-13 03:12:04.612007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.612199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.612226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.966 qpair failed and we were unable to recover it. 00:31:13.966 [2024-05-13 03:12:04.612457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.612678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.612720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.966 qpair failed and we were unable to recover it. 00:31:13.966 [2024-05-13 03:12:04.612983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.613223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.613248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.966 qpair failed and we were unable to recover it. 00:31:13.966 [2024-05-13 03:12:04.613494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.613745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.613772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.966 qpair failed and we were unable to recover it. 00:31:13.966 [2024-05-13 03:12:04.614018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.614286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.614332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.966 qpair failed and we were unable to recover it. 
00:31:13.966 [2024-05-13 03:12:04.614576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.614796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.614825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.966 qpair failed and we were unable to recover it. 00:31:13.966 [2024-05-13 03:12:04.615061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.615274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.615299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.966 qpair failed and we were unable to recover it. 00:31:13.966 [2024-05-13 03:12:04.615515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.615757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.615783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.966 qpair failed and we were unable to recover it. 00:31:13.966 [2024-05-13 03:12:04.616039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.616521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.616573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.966 qpair failed and we were unable to recover it. 00:31:13.966 [2024-05-13 03:12:04.616788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.617031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.617056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.966 qpair failed and we were unable to recover it. 00:31:13.966 [2024-05-13 03:12:04.617279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.617463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.617490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.966 qpair failed and we were unable to recover it. 00:31:13.966 [2024-05-13 03:12:04.617676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.617942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.617968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.966 qpair failed and we were unable to recover it. 
00:31:13.966 [2024-05-13 03:12:04.618198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.618400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.618427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.966 qpair failed and we were unable to recover it. 00:31:13.966 [2024-05-13 03:12:04.618625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.618866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.618892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.966 qpair failed and we were unable to recover it. 00:31:13.966 [2024-05-13 03:12:04.619114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.619353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.619378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.966 qpair failed and we were unable to recover it. 00:31:13.966 [2024-05-13 03:12:04.619615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.619829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.619855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.966 qpair failed and we were unable to recover it. 00:31:13.966 [2024-05-13 03:12:04.620076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.620313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.620338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.966 qpair failed and we were unable to recover it. 00:31:13.966 [2024-05-13 03:12:04.620609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.620797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.620823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.966 qpair failed and we were unable to recover it. 00:31:13.966 [2024-05-13 03:12:04.621012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.621227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.621252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.966 qpair failed and we were unable to recover it. 
00:31:13.966 [2024-05-13 03:12:04.621465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.621658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.621686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.966 qpair failed and we were unable to recover it. 00:31:13.966 [2024-05-13 03:12:04.621917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.622144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.622171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.966 qpair failed and we were unable to recover it. 00:31:13.966 [2024-05-13 03:12:04.622388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.622632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.622657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.966 qpair failed and we were unable to recover it. 00:31:13.966 [2024-05-13 03:12:04.622889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.623073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.623099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.966 qpair failed and we were unable to recover it. 00:31:13.966 [2024-05-13 03:12:04.623285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.623468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.966 [2024-05-13 03:12:04.623493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.966 qpair failed and we were unable to recover it. 00:31:13.967 [2024-05-13 03:12:04.623710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.623931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.623956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.967 qpair failed and we were unable to recover it. 00:31:13.967 [2024-05-13 03:12:04.624144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.624361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.624386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.967 qpair failed and we were unable to recover it. 
00:31:13.967 [2024-05-13 03:12:04.624605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.624825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.624852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.967 qpair failed and we were unable to recover it. 00:31:13.967 [2024-05-13 03:12:04.625032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.625248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.625273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.967 qpair failed and we were unable to recover it. 00:31:13.967 [2024-05-13 03:12:04.625468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.625682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.625715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.967 qpair failed and we were unable to recover it. 00:31:13.967 [2024-05-13 03:12:04.625967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.626184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.626210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.967 qpair failed and we were unable to recover it. 00:31:13.967 [2024-05-13 03:12:04.626396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.626613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.626639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.967 qpair failed and we were unable to recover it. 00:31:13.967 [2024-05-13 03:12:04.626861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.627055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.627080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.967 qpair failed and we were unable to recover it. 00:31:13.967 [2024-05-13 03:12:04.627298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.627487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.627512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.967 qpair failed and we were unable to recover it. 
00:31:13.967 [2024-05-13 03:12:04.627773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.627991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.628022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.967 qpair failed and we were unable to recover it. 00:31:13.967 [2024-05-13 03:12:04.628268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.628481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.628506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.967 qpair failed and we were unable to recover it. 00:31:13.967 [2024-05-13 03:12:04.628751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.629087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.629116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.967 qpair failed and we were unable to recover it. 00:31:13.967 [2024-05-13 03:12:04.629335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.629578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.629604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.967 qpair failed and we were unable to recover it. 00:31:13.967 [2024-05-13 03:12:04.629861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.630109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.630134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.967 qpair failed and we were unable to recover it. 00:31:13.967 [2024-05-13 03:12:04.630362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.630572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.630602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.967 qpair failed and we were unable to recover it. 00:31:13.967 [2024-05-13 03:12:04.630813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.631005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.631030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.967 qpair failed and we were unable to recover it. 
00:31:13.967 [2024-05-13 03:12:04.631248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.631455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.631484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.967 qpair failed and we were unable to recover it. 00:31:13.967 [2024-05-13 03:12:04.631680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.631932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.631957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.967 qpair failed and we were unable to recover it. 00:31:13.967 [2024-05-13 03:12:04.632146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.632413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.632439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.967 qpair failed and we were unable to recover it. 00:31:13.967 [2024-05-13 03:12:04.632681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.632942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.632968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.967 qpair failed and we were unable to recover it. 00:31:13.967 [2024-05-13 03:12:04.633189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.633502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.633545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.967 qpair failed and we were unable to recover it. 00:31:13.967 [2024-05-13 03:12:04.633801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.633990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.634015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.967 qpair failed and we were unable to recover it. 00:31:13.967 [2024-05-13 03:12:04.634223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.634447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.634473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.967 qpair failed and we were unable to recover it. 
00:31:13.967 [2024-05-13 03:12:04.634664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.634888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.634914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.967 qpair failed and we were unable to recover it. 00:31:13.967 [2024-05-13 03:12:04.635133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.635351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.635377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.967 qpair failed and we were unable to recover it. 00:31:13.967 [2024-05-13 03:12:04.635634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.635845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.635871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.967 qpair failed and we were unable to recover it. 00:31:13.967 [2024-05-13 03:12:04.636091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.636279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.636309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.967 qpair failed and we were unable to recover it. 00:31:13.967 [2024-05-13 03:12:04.636528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.636728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.636768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.967 qpair failed and we were unable to recover it. 00:31:13.967 [2024-05-13 03:12:04.637007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.637224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.967 [2024-05-13 03:12:04.637249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.967 qpair failed and we were unable to recover it. 00:31:13.967 [2024-05-13 03:12:04.637470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.968 [2024-05-13 03:12:04.637659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.968 [2024-05-13 03:12:04.637684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.968 qpair failed and we were unable to recover it. 
00:31:13.968 [2024-05-13 03:12:04.637952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.968 [2024-05-13 03:12:04.638194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.968 [2024-05-13 03:12:04.638222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.968 qpair failed and we were unable to recover it. 00:31:13.968 [2024-05-13 03:12:04.638487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.968 [2024-05-13 03:12:04.638707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.968 [2024-05-13 03:12:04.638733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.968 qpair failed and we were unable to recover it. 00:31:13.968 [2024-05-13 03:12:04.638945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.968 [2024-05-13 03:12:04.639165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.968 [2024-05-13 03:12:04.639190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.968 qpair failed and we were unable to recover it. 00:31:13.968 [2024-05-13 03:12:04.639379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.968 [2024-05-13 03:12:04.639595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.968 [2024-05-13 03:12:04.639621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.968 qpair failed and we were unable to recover it. 00:31:13.968 [2024-05-13 03:12:04.639877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.968 [2024-05-13 03:12:04.640142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.968 [2024-05-13 03:12:04.640187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.968 qpair failed and we were unable to recover it. 00:31:13.968 [2024-05-13 03:12:04.640429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.968 [2024-05-13 03:12:04.640690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.968 [2024-05-13 03:12:04.640731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.968 qpair failed and we were unable to recover it. 00:31:13.968 [2024-05-13 03:12:04.640948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.968 [2024-05-13 03:12:04.641142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.968 [2024-05-13 03:12:04.641174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.968 qpair failed and we were unable to recover it. 
00:31:13.968 [2024-05-13 03:12:04.641416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.968 [2024-05-13 03:12:04.641635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.968 [2024-05-13 03:12:04.641663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.968 qpair failed and we were unable to recover it. 00:31:13.968 [2024-05-13 03:12:04.641861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.968 [2024-05-13 03:12:04.642109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.968 [2024-05-13 03:12:04.642135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.968 qpair failed and we were unable to recover it. 00:31:13.968 [2024-05-13 03:12:04.642322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.968 [2024-05-13 03:12:04.642574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.968 [2024-05-13 03:12:04.642599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.968 qpair failed and we were unable to recover it. 00:31:13.968 [2024-05-13 03:12:04.642883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.968 [2024-05-13 03:12:04.643103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.968 [2024-05-13 03:12:04.643128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.968 qpair failed and we were unable to recover it. 00:31:13.968 [2024-05-13 03:12:04.643366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.968 [2024-05-13 03:12:04.643581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.968 [2024-05-13 03:12:04.643609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.968 qpair failed and we were unable to recover it. 00:31:13.968 [2024-05-13 03:12:04.643828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.968 [2024-05-13 03:12:04.644070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.968 [2024-05-13 03:12:04.644095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.968 qpair failed and we were unable to recover it. 00:31:13.968 [2024-05-13 03:12:04.644371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.968 [2024-05-13 03:12:04.644633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.968 [2024-05-13 03:12:04.644661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.968 qpair failed and we were unable to recover it. 
00:31:13.968 [2024-05-13 03:12:04.644906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.968 [2024-05-13 03:12:04.645349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.968 [2024-05-13 03:12:04.645397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.968 qpair failed and we were unable to recover it. 00:31:13.968 [2024-05-13 03:12:04.645636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.968 [2024-05-13 03:12:04.645856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.968 [2024-05-13 03:12:04.645882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.968 qpair failed and we were unable to recover it. 00:31:13.968 [2024-05-13 03:12:04.646102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.968 [2024-05-13 03:12:04.646322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.968 [2024-05-13 03:12:04.646352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.968 qpair failed and we were unable to recover it. 00:31:13.968 [2024-05-13 03:12:04.646630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.968 [2024-05-13 03:12:04.646846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.968 [2024-05-13 03:12:04.646872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.968 qpair failed and we were unable to recover it. 00:31:13.968 [2024-05-13 03:12:04.647087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.968 [2024-05-13 03:12:04.647330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.968 [2024-05-13 03:12:04.647354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.968 qpair failed and we were unable to recover it. 00:31:13.968 [2024-05-13 03:12:04.647583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.968 [2024-05-13 03:12:04.647794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.968 [2024-05-13 03:12:04.647820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.968 qpair failed and we were unable to recover it. 00:31:13.968 [2024-05-13 03:12:04.648058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.968 [2024-05-13 03:12:04.648303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.968 [2024-05-13 03:12:04.648329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.968 qpair failed and we were unable to recover it. 
00:31:13.968 [2024-05-13 03:12:04.648546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.968 [2024-05-13 03:12:04.648765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.968 [2024-05-13 03:12:04.648791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.968 qpair failed and we were unable to recover it. 00:31:13.968 [2024-05-13 03:12:04.649005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.968 [2024-05-13 03:12:04.649233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.968 [2024-05-13 03:12:04.649261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.968 qpair failed and we were unable to recover it. 00:31:13.968 [2024-05-13 03:12:04.649471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.968 [2024-05-13 03:12:04.649682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.968 [2024-05-13 03:12:04.649718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.968 qpair failed and we were unable to recover it. 00:31:13.969 [2024-05-13 03:12:04.649938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.650156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.650181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.969 qpair failed and we were unable to recover it. 00:31:13.969 [2024-05-13 03:12:04.650404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.650596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.650620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.969 qpair failed and we were unable to recover it. 00:31:13.969 [2024-05-13 03:12:04.650839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.651056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.651080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.969 qpair failed and we were unable to recover it. 00:31:13.969 [2024-05-13 03:12:04.651335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.651634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.651658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.969 qpair failed and we were unable to recover it. 
00:31:13.969 [2024-05-13 03:12:04.651901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.652097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.652124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.969 qpair failed and we were unable to recover it. 00:31:13.969 [2024-05-13 03:12:04.652382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.652623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.652651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.969 qpair failed and we were unable to recover it. 00:31:13.969 [2024-05-13 03:12:04.652921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.653155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.653181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.969 qpair failed and we were unable to recover it. 00:31:13.969 [2024-05-13 03:12:04.653408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.653634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.653658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.969 qpair failed and we were unable to recover it. 00:31:13.969 [2024-05-13 03:12:04.653905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.654122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.654147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.969 qpair failed and we were unable to recover it. 00:31:13.969 [2024-05-13 03:12:04.654440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.654721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.654750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.969 qpair failed and we were unable to recover it. 00:31:13.969 [2024-05-13 03:12:04.654997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.655191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.655217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.969 qpair failed and we were unable to recover it. 
00:31:13.969 [2024-05-13 03:12:04.655466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.655709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.655735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.969 qpair failed and we were unable to recover it. 00:31:13.969 [2024-05-13 03:12:04.655984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.656260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.656288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.969 qpair failed and we were unable to recover it. 00:31:13.969 [2024-05-13 03:12:04.656568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.656841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.656867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.969 qpair failed and we were unable to recover it. 00:31:13.969 [2024-05-13 03:12:04.657112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.657336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.657365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.969 qpair failed and we were unable to recover it. 00:31:13.969 [2024-05-13 03:12:04.657600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.657818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.657844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.969 qpair failed and we were unable to recover it. 00:31:13.969 [2024-05-13 03:12:04.658089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.658347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.658372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.969 qpair failed and we were unable to recover it. 00:31:13.969 [2024-05-13 03:12:04.658663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.658908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.658937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.969 qpair failed and we were unable to recover it. 
00:31:13.969 [2024-05-13 03:12:04.659171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.659426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.659451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.969 qpair failed and we were unable to recover it. 00:31:13.969 [2024-05-13 03:12:04.659726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.659993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.660021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.969 qpair failed and we were unable to recover it. 00:31:13.969 [2024-05-13 03:12:04.660265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.660680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.660746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.969 qpair failed and we were unable to recover it. 00:31:13.969 [2024-05-13 03:12:04.660995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.661246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.661271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.969 qpair failed and we were unable to recover it. 00:31:13.969 [2024-05-13 03:12:04.661559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.661808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.661837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.969 qpair failed and we were unable to recover it. 00:31:13.969 [2024-05-13 03:12:04.662109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.662540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.662589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.969 qpair failed and we were unable to recover it. 00:31:13.969 [2024-05-13 03:12:04.662875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.663162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.663190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.969 qpair failed and we were unable to recover it. 
00:31:13.969 [2024-05-13 03:12:04.663434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.663709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.663738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.969 qpair failed and we were unable to recover it. 00:31:13.969 [2024-05-13 03:12:04.663959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.664218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.664242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.969 qpair failed and we were unable to recover it. 00:31:13.969 [2024-05-13 03:12:04.664481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.664720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.664748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.969 qpair failed and we were unable to recover it. 00:31:13.969 [2024-05-13 03:12:04.664964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.665422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.665472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.969 qpair failed and we were unable to recover it. 00:31:13.969 [2024-05-13 03:12:04.665684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.969 [2024-05-13 03:12:04.665933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.665961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.970 qpair failed and we were unable to recover it. 00:31:13.970 [2024-05-13 03:12:04.666256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.666735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.666764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.970 qpair failed and we were unable to recover it. 00:31:13.970 [2024-05-13 03:12:04.667010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.667245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.667273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.970 qpair failed and we were unable to recover it. 
00:31:13.970 [2024-05-13 03:12:04.667502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.667747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.667776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.970 qpair failed and we were unable to recover it. 00:31:13.970 [2024-05-13 03:12:04.668054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.668303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.668328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.970 qpair failed and we were unable to recover it. 00:31:13.970 [2024-05-13 03:12:04.668572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.668810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.668839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.970 qpair failed and we were unable to recover it. 00:31:13.970 [2024-05-13 03:12:04.669070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.669307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.669334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.970 qpair failed and we were unable to recover it. 00:31:13.970 [2024-05-13 03:12:04.669598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.669811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.669838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.970 qpair failed and we were unable to recover it. 00:31:13.970 [2024-05-13 03:12:04.670136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.670596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.670648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.970 qpair failed and we were unable to recover it. 00:31:13.970 [2024-05-13 03:12:04.670867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.671220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.671268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.970 qpair failed and we were unable to recover it. 
00:31:13.970 [2024-05-13 03:12:04.671584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.671817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.671845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.970 qpair failed and we were unable to recover it. 00:31:13.970 [2024-05-13 03:12:04.672087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.672315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.672339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.970 qpair failed and we were unable to recover it. 00:31:13.970 [2024-05-13 03:12:04.672599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.672837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.672866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.970 qpair failed and we were unable to recover it. 00:31:13.970 [2024-05-13 03:12:04.673115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.673350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.673375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.970 qpair failed and we were unable to recover it. 00:31:13.970 [2024-05-13 03:12:04.673621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.673893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.673919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.970 qpair failed and we were unable to recover it. 00:31:13.970 [2024-05-13 03:12:04.674138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.674400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.674425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.970 qpair failed and we were unable to recover it. 00:31:13.970 [2024-05-13 03:12:04.674641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.674896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.674922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.970 qpair failed and we were unable to recover it. 
00:31:13.970 [2024-05-13 03:12:04.675168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.675396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.675421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.970 qpair failed and we were unable to recover it. 00:31:13.970 [2024-05-13 03:12:04.675647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.675860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.675886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.970 qpair failed and we were unable to recover it. 00:31:13.970 [2024-05-13 03:12:04.676113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.676361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.676401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.970 qpair failed and we were unable to recover it. 00:31:13.970 [2024-05-13 03:12:04.676675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.676957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.676986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.970 qpair failed and we were unable to recover it. 00:31:13.970 [2024-05-13 03:12:04.677251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.677463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.677492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.970 qpair failed and we were unable to recover it. 00:31:13.970 [2024-05-13 03:12:04.677771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.677979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.678009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.970 qpair failed and we were unable to recover it. 00:31:13.970 [2024-05-13 03:12:04.678266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.678533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.678557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.970 qpair failed and we were unable to recover it. 
00:31:13.970 [2024-05-13 03:12:04.678798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.679000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.679030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.970 qpair failed and we were unable to recover it. 00:31:13.970 [2024-05-13 03:12:04.679338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.679564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.679589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.970 qpair failed and we were unable to recover it. 00:31:13.970 [2024-05-13 03:12:04.679839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.680094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.680119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.970 qpair failed and we were unable to recover it. 00:31:13.970 [2024-05-13 03:12:04.680375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.680612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.680641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.970 qpair failed and we were unable to recover it. 00:31:13.970 [2024-05-13 03:12:04.680895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.681137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.681162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.970 qpair failed and we were unable to recover it. 00:31:13.970 [2024-05-13 03:12:04.681450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.681656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.970 [2024-05-13 03:12:04.681686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.970 qpair failed and we were unable to recover it. 00:31:13.970 [2024-05-13 03:12:04.681938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.682184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.682212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.971 qpair failed and we were unable to recover it. 
00:31:13.971 [2024-05-13 03:12:04.682455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.682672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.682706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.971 qpair failed and we were unable to recover it. 00:31:13.971 [2024-05-13 03:12:04.682951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.683357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.683403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.971 qpair failed and we were unable to recover it. 00:31:13.971 [2024-05-13 03:12:04.683715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.683965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.683992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.971 qpair failed and we were unable to recover it. 00:31:13.971 [2024-05-13 03:12:04.684217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.684460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.684485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.971 qpair failed and we were unable to recover it. 00:31:13.971 [2024-05-13 03:12:04.684723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.684944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.684970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.971 qpair failed and we were unable to recover it. 00:31:13.971 [2024-05-13 03:12:04.685214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.685406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.685431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.971 qpair failed and we were unable to recover it. 00:31:13.971 [2024-05-13 03:12:04.685677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.685945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.685971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.971 qpair failed and we were unable to recover it. 
00:31:13.971 [2024-05-13 03:12:04.686255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.686497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.686525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.971 qpair failed and we were unable to recover it. 00:31:13.971 [2024-05-13 03:12:04.686815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.687070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.687094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.971 qpair failed and we were unable to recover it. 00:31:13.971 [2024-05-13 03:12:04.687312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.687493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.687518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.971 qpair failed and we were unable to recover it. 00:31:13.971 [2024-05-13 03:12:04.687772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.687959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.687985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.971 qpair failed and we were unable to recover it. 00:31:13.971 [2024-05-13 03:12:04.688177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.688456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.688481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.971 qpair failed and we were unable to recover it. 00:31:13.971 [2024-05-13 03:12:04.688743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.689063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.689092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.971 qpair failed and we were unable to recover it. 00:31:13.971 [2024-05-13 03:12:04.689336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.689554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.689584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.971 qpair failed and we were unable to recover it. 
00:31:13.971 [2024-05-13 03:12:04.689834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.690079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.690120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.971 qpair failed and we were unable to recover it. 00:31:13.971 [2024-05-13 03:12:04.690339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.690600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.690625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.971 qpair failed and we were unable to recover it. 00:31:13.971 [2024-05-13 03:12:04.690893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.691139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.691164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.971 qpair failed and we were unable to recover it. 00:31:13.971 [2024-05-13 03:12:04.691430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.691703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.691745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.971 qpair failed and we were unable to recover it. 00:31:13.971 [2024-05-13 03:12:04.692116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.692451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.692478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.971 qpair failed and we were unable to recover it. 00:31:13.971 [2024-05-13 03:12:04.692727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.692928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.692954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.971 qpair failed and we were unable to recover it. 00:31:13.971 [2024-05-13 03:12:04.693245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.693621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.693645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.971 qpair failed and we were unable to recover it. 
00:31:13.971 [2024-05-13 03:12:04.693881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.694087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.694111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.971 qpair failed and we were unable to recover it. 00:31:13.971 [2024-05-13 03:12:04.694329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.694538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.694564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.971 qpair failed and we were unable to recover it. 00:31:13.971 [2024-05-13 03:12:04.694774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.694998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.695038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.971 qpair failed and we were unable to recover it. 00:31:13.971 [2024-05-13 03:12:04.695257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.695476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.695501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.971 qpair failed and we were unable to recover it. 00:31:13.971 [2024-05-13 03:12:04.695764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.695986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.696012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.971 qpair failed and we were unable to recover it. 00:31:13.971 [2024-05-13 03:12:04.696266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.696681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.696748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.971 qpair failed and we were unable to recover it. 00:31:13.971 [2024-05-13 03:12:04.697048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.697350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.697375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.971 qpair failed and we were unable to recover it. 
00:31:13.971 [2024-05-13 03:12:04.697637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.697876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.971 [2024-05-13 03:12:04.697907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.972 qpair failed and we were unable to recover it. 00:31:13.972 [2024-05-13 03:12:04.698175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.972 [2024-05-13 03:12:04.698459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.972 [2024-05-13 03:12:04.698483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.972 qpair failed and we were unable to recover it. 00:31:13.972 [2024-05-13 03:12:04.698753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.972 [2024-05-13 03:12:04.699058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.972 [2024-05-13 03:12:04.699116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.972 qpair failed and we were unable to recover it. 00:31:13.972 [2024-05-13 03:12:04.699327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.972 [2024-05-13 03:12:04.699775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.972 [2024-05-13 03:12:04.699804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.972 qpair failed and we were unable to recover it. 00:31:13.972 [2024-05-13 03:12:04.700043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.972 [2024-05-13 03:12:04.700368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.972 [2024-05-13 03:12:04.700396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.972 qpair failed and we were unable to recover it. 00:31:13.972 [2024-05-13 03:12:04.700648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.972 [2024-05-13 03:12:04.700912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.972 [2024-05-13 03:12:04.700939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.972 qpair failed and we were unable to recover it. 00:31:13.972 [2024-05-13 03:12:04.701163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.972 [2024-05-13 03:12:04.701591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.972 [2024-05-13 03:12:04.701637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.972 qpair failed and we were unable to recover it. 
00:31:13.972 [2024-05-13 03:12:04.701875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.972 [2024-05-13 03:12:04.702099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.972 [2024-05-13 03:12:04.702123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.972 qpair failed and we were unable to recover it. 00:31:13.972 [2024-05-13 03:12:04.702358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.972 [2024-05-13 03:12:04.702605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.972 [2024-05-13 03:12:04.702631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.972 qpair failed and we were unable to recover it. 00:31:13.972 [2024-05-13 03:12:04.702888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.972 [2024-05-13 03:12:04.703133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.972 [2024-05-13 03:12:04.703173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.972 qpair failed and we were unable to recover it. 00:31:13.972 [2024-05-13 03:12:04.703457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.972 [2024-05-13 03:12:04.703692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.972 [2024-05-13 03:12:04.703728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.972 qpair failed and we were unable to recover it. 00:31:13.972 [2024-05-13 03:12:04.704012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.972 [2024-05-13 03:12:04.704341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.972 [2024-05-13 03:12:04.704369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.972 qpair failed and we were unable to recover it. 00:31:13.972 [2024-05-13 03:12:04.704612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.972 [2024-05-13 03:12:04.704863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.972 [2024-05-13 03:12:04.704892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.972 qpair failed and we were unable to recover it. 00:31:13.972 [2024-05-13 03:12:04.705160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.972 [2024-05-13 03:12:04.705613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.972 [2024-05-13 03:12:04.705663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.972 qpair failed and we were unable to recover it. 
00:31:13.972 [2024-05-13 03:12:04.705978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.972 [2024-05-13 03:12:04.706318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.972 [2024-05-13 03:12:04.706343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.972 qpair failed and we were unable to recover it. 00:31:13.972 [2024-05-13 03:12:04.706596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.972 [2024-05-13 03:12:04.706849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.972 [2024-05-13 03:12:04.706880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.972 qpair failed and we were unable to recover it. 00:31:13.972 [2024-05-13 03:12:04.707086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.972 [2024-05-13 03:12:04.707575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.972 [2024-05-13 03:12:04.707627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.972 qpair failed and we were unable to recover it. 00:31:13.972 [2024-05-13 03:12:04.707919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.972 [2024-05-13 03:12:04.708315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.972 [2024-05-13 03:12:04.708365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.972 qpair failed and we were unable to recover it. 00:31:13.972 [2024-05-13 03:12:04.708639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.972 [2024-05-13 03:12:04.708881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.972 [2024-05-13 03:12:04.708909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.972 qpair failed and we were unable to recover it. 00:31:13.972 [2024-05-13 03:12:04.709156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.972 [2024-05-13 03:12:04.709395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.972 [2024-05-13 03:12:04.709422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.972 qpair failed and we were unable to recover it. 00:31:13.972 [2024-05-13 03:12:04.709707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.972 [2024-05-13 03:12:04.710010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.972 [2024-05-13 03:12:04.710049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.972 qpair failed and we were unable to recover it. 
00:31:13.972 [2024-05-13 03:12:04.710321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.972 [2024-05-13 03:12:04.710564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.972 [2024-05-13 03:12:04.710603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.972 qpair failed and we were unable to recover it. 00:31:13.972 [2024-05-13 03:12:04.710878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.972 [2024-05-13 03:12:04.711096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.972 [2024-05-13 03:12:04.711122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.972 qpair failed and we were unable to recover it. 00:31:13.972 [2024-05-13 03:12:04.711347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.972 [2024-05-13 03:12:04.711604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.972 [2024-05-13 03:12:04.711629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.972 qpair failed and we were unable to recover it. 00:31:13.972 [2024-05-13 03:12:04.711868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.972 [2024-05-13 03:12:04.712086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.972 [2024-05-13 03:12:04.712111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.972 qpair failed and we were unable to recover it. 00:31:13.972 [2024-05-13 03:12:04.712337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.972 [2024-05-13 03:12:04.712585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.972 [2024-05-13 03:12:04.712625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.972 qpair failed and we were unable to recover it. 00:31:13.972 [2024-05-13 03:12:04.712885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.713104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.713132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.973 qpair failed and we were unable to recover it. 00:31:13.973 [2024-05-13 03:12:04.713342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.713570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.713599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.973 qpair failed and we were unable to recover it. 
00:31:13.973 [2024-05-13 03:12:04.713828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.714209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.714265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.973 qpair failed and we were unable to recover it. 00:31:13.973 [2024-05-13 03:12:04.714564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.714792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.714818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.973 qpair failed and we were unable to recover it. 00:31:13.973 [2024-05-13 03:12:04.715036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.715231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.715256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.973 qpair failed and we were unable to recover it. 00:31:13.973 [2024-05-13 03:12:04.715497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.715738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.715764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.973 qpair failed and we were unable to recover it. 00:31:13.973 [2024-05-13 03:12:04.715981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.716209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.716234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.973 qpair failed and we were unable to recover it. 00:31:13.973 [2024-05-13 03:12:04.716479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.716667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.716693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.973 qpair failed and we were unable to recover it. 00:31:13.973 [2024-05-13 03:12:04.716951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.717432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.717481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.973 qpair failed and we were unable to recover it. 
00:31:13.973 [2024-05-13 03:12:04.717791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.718021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.718055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.973 qpair failed and we were unable to recover it. 00:31:13.973 [2024-05-13 03:12:04.718321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.718794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.718823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.973 qpair failed and we were unable to recover it. 00:31:13.973 [2024-05-13 03:12:04.719052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.719270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.719300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.973 qpair failed and we were unable to recover it. 00:31:13.973 [2024-05-13 03:12:04.719532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.719777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.719804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.973 qpair failed and we were unable to recover it. 00:31:13.973 [2024-05-13 03:12:04.720071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.720287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.720312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.973 qpair failed and we were unable to recover it. 00:31:13.973 [2024-05-13 03:12:04.720540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.720748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.720774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.973 qpair failed and we were unable to recover it. 00:31:13.973 [2024-05-13 03:12:04.720971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.721222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.721247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.973 qpair failed and we were unable to recover it. 
00:31:13.973 [2024-05-13 03:12:04.721559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.721831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.721860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.973 qpair failed and we were unable to recover it. 00:31:13.973 [2024-05-13 03:12:04.722105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.722403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.722427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.973 qpair failed and we were unable to recover it. 00:31:13.973 [2024-05-13 03:12:04.722680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.722878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.722903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.973 qpair failed and we were unable to recover it. 00:31:13.973 [2024-05-13 03:12:04.723110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.723360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.723407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.973 qpair failed and we were unable to recover it. 00:31:13.973 [2024-05-13 03:12:04.723686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.723941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.723966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.973 qpair failed and we were unable to recover it. 00:31:13.973 [2024-05-13 03:12:04.724368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.724688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.724741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.973 qpair failed and we were unable to recover it. 00:31:13.973 [2024-05-13 03:12:04.725002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.725402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.725453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.973 qpair failed and we were unable to recover it. 
00:31:13.973 [2024-05-13 03:12:04.725709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.725942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.725967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.973 qpair failed and we were unable to recover it. 00:31:13.973 [2024-05-13 03:12:04.726189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.726468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.726493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.973 qpair failed and we were unable to recover it. 00:31:13.973 [2024-05-13 03:12:04.726717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.726926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.726953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.973 qpair failed and we were unable to recover it. 00:31:13.973 [2024-05-13 03:12:04.727165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.727385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.727410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.973 qpair failed and we were unable to recover it. 00:31:13.973 [2024-05-13 03:12:04.727667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.727902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.727928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.973 qpair failed and we were unable to recover it. 00:31:13.973 [2024-05-13 03:12:04.728199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.728442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.728467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.973 qpair failed and we were unable to recover it. 00:31:13.973 [2024-05-13 03:12:04.728693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.973 [2024-05-13 03:12:04.728962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.728996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.974 qpair failed and we were unable to recover it. 
00:31:13.974 [2024-05-13 03:12:04.729233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.729484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.729524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.974 qpair failed and we were unable to recover it. 00:31:13.974 [2024-05-13 03:12:04.729770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.729986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.730025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.974 qpair failed and we were unable to recover it. 00:31:13.974 [2024-05-13 03:12:04.730275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.730615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.730673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.974 qpair failed and we were unable to recover it. 00:31:13.974 [2024-05-13 03:12:04.730921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.731098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.731123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.974 qpair failed and we were unable to recover it. 00:31:13.974 [2024-05-13 03:12:04.731355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.731606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.731631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.974 qpair failed and we were unable to recover it. 00:31:13.974 [2024-05-13 03:12:04.731849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.732044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.732068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.974 qpair failed and we were unable to recover it. 00:31:13.974 [2024-05-13 03:12:04.732330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.732582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.732611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.974 qpair failed and we were unable to recover it. 
00:31:13.974 [2024-05-13 03:12:04.732895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.733314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.733364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.974 qpair failed and we were unable to recover it. 00:31:13.974 [2024-05-13 03:12:04.733609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.733905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.733935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.974 qpair failed and we were unable to recover it. 00:31:13.974 [2024-05-13 03:12:04.734225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.734616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.734673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.974 qpair failed and we were unable to recover it. 00:31:13.974 [2024-05-13 03:12:04.734930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.735362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.735413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.974 qpair failed and we were unable to recover it. 00:31:13.974 [2024-05-13 03:12:04.735683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.735935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.735964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.974 qpair failed and we were unable to recover it. 00:31:13.974 [2024-05-13 03:12:04.736243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.736683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.736739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.974 qpair failed and we were unable to recover it. 00:31:13.974 [2024-05-13 03:12:04.736983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.737420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.737465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.974 qpair failed and we were unable to recover it. 
00:31:13.974 [2024-05-13 03:12:04.737775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.738020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.738048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.974 qpair failed and we were unable to recover it. 00:31:13.974 [2024-05-13 03:12:04.738266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.738518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.738543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.974 qpair failed and we were unable to recover it. 00:31:13.974 [2024-05-13 03:12:04.738777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.739016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.739045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.974 qpair failed and we were unable to recover it. 00:31:13.974 [2024-05-13 03:12:04.739276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.739478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.739506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.974 qpair failed and we were unable to recover it. 00:31:13.974 [2024-05-13 03:12:04.739816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.740050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.740074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.974 qpair failed and we were unable to recover it. 00:31:13.974 [2024-05-13 03:12:04.740371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.740618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.740644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.974 qpair failed and we were unable to recover it. 00:31:13.974 [2024-05-13 03:12:04.740883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.741069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.741094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.974 qpair failed and we were unable to recover it. 
00:31:13.974 [2024-05-13 03:12:04.741292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.741573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.741599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.974 qpair failed and we were unable to recover it. 00:31:13.974 [2024-05-13 03:12:04.741857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.742060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.742086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.974 qpair failed and we were unable to recover it. 00:31:13.974 [2024-05-13 03:12:04.742389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.742640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.742665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.974 qpair failed and we were unable to recover it. 00:31:13.974 [2024-05-13 03:12:04.742897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.743119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.743143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.974 qpair failed and we were unable to recover it. 00:31:13.974 [2024-05-13 03:12:04.743339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.743592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.743616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.974 qpair failed and we were unable to recover it. 00:31:13.974 [2024-05-13 03:12:04.743874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.744092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.744117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.974 qpair failed and we were unable to recover it. 00:31:13.974 [2024-05-13 03:12:04.744484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.744793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.744822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.974 qpair failed and we were unable to recover it. 
00:31:13.974 [2024-05-13 03:12:04.745063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.745289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.974 [2024-05-13 03:12:04.745314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.974 qpair failed and we were unable to recover it. 00:31:13.975 [2024-05-13 03:12:04.745561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.975 [2024-05-13 03:12:04.745825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.975 [2024-05-13 03:12:04.745855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.975 qpair failed and we were unable to recover it. 00:31:13.975 [2024-05-13 03:12:04.746079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.975 [2024-05-13 03:12:04.746322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.975 [2024-05-13 03:12:04.746347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.975 qpair failed and we were unable to recover it. 00:31:13.975 [2024-05-13 03:12:04.746613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.975 [2024-05-13 03:12:04.746827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.975 [2024-05-13 03:12:04.746853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.975 qpair failed and we were unable to recover it. 00:31:13.975 [2024-05-13 03:12:04.747066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.975 [2024-05-13 03:12:04.747275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.975 [2024-05-13 03:12:04.747301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.975 qpair failed and we were unable to recover it. 00:31:13.975 [2024-05-13 03:12:04.747518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.975 [2024-05-13 03:12:04.747736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.975 [2024-05-13 03:12:04.747762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.975 qpair failed and we were unable to recover it. 00:31:13.975 [2024-05-13 03:12:04.747985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.975 [2024-05-13 03:12:04.748249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.975 [2024-05-13 03:12:04.748274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.975 qpair failed and we were unable to recover it. 
00:31:13.975 [2024-05-13 03:12:04.748535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.975 [2024-05-13 03:12:04.748755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.975 [2024-05-13 03:12:04.748782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.975 qpair failed and we were unable to recover it. 00:31:13.975 [2024-05-13 03:12:04.749005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.975 [2024-05-13 03:12:04.749250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.975 [2024-05-13 03:12:04.749277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.975 qpair failed and we were unable to recover it. 00:31:13.975 [2024-05-13 03:12:04.749473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.975 [2024-05-13 03:12:04.749666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.975 [2024-05-13 03:12:04.749692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.975 qpair failed and we were unable to recover it. 00:31:13.975 [2024-05-13 03:12:04.749920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.975 [2024-05-13 03:12:04.750123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.975 [2024-05-13 03:12:04.750147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.975 qpair failed and we were unable to recover it. 00:31:13.975 [2024-05-13 03:12:04.750358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.975 [2024-05-13 03:12:04.750594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.975 [2024-05-13 03:12:04.750621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.975 qpair failed and we were unable to recover it. 00:31:13.975 [2024-05-13 03:12:04.750852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.975 [2024-05-13 03:12:04.751070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.975 [2024-05-13 03:12:04.751095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:13.975 qpair failed and we were unable to recover it. 00:31:13.975 [2024-05-13 03:12:04.751311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.243 [2024-05-13 03:12:04.751530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.243 [2024-05-13 03:12:04.751555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.243 qpair failed and we were unable to recover it. 
00:31:14.243 [2024-05-13 03:12:04.751774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.243 [2024-05-13 03:12:04.752012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.243 [2024-05-13 03:12:04.752037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.243 qpair failed and we were unable to recover it. 00:31:14.243 [2024-05-13 03:12:04.752259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.243 [2024-05-13 03:12:04.752491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.243 [2024-05-13 03:12:04.752520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.243 qpair failed and we were unable to recover it. 00:31:14.243 [2024-05-13 03:12:04.752739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.243 [2024-05-13 03:12:04.752953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.752981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.244 qpair failed and we were unable to recover it. 00:31:14.244 [2024-05-13 03:12:04.753213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.753433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.753458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.244 qpair failed and we were unable to recover it. 00:31:14.244 [2024-05-13 03:12:04.753725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.754006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.754035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.244 qpair failed and we were unable to recover it. 00:31:14.244 [2024-05-13 03:12:04.754275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.754551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.754577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.244 qpair failed and we were unable to recover it. 00:31:14.244 [2024-05-13 03:12:04.754855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.755093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.755122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.244 qpair failed and we were unable to recover it. 
00:31:14.244 [2024-05-13 03:12:04.755389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.755651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.755690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.244 qpair failed and we were unable to recover it. 00:31:14.244 [2024-05-13 03:12:04.756014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.756360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.756385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.244 qpair failed and we were unable to recover it. 00:31:14.244 [2024-05-13 03:12:04.756601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.756824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.756850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.244 qpair failed and we were unable to recover it. 00:31:14.244 [2024-05-13 03:12:04.757052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.757259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.757284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.244 qpair failed and we were unable to recover it. 00:31:14.244 [2024-05-13 03:12:04.757509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.757759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.757785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.244 qpair failed and we were unable to recover it. 00:31:14.244 [2024-05-13 03:12:04.758116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.758369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.758398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.244 qpair failed and we were unable to recover it. 00:31:14.244 [2024-05-13 03:12:04.758631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.758897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.758923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.244 qpair failed and we were unable to recover it. 
00:31:14.244 [2024-05-13 03:12:04.759198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.759584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.759608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.244 qpair failed and we were unable to recover it. 00:31:14.244 [2024-05-13 03:12:04.759844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.760089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.760114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.244 qpair failed and we were unable to recover it. 00:31:14.244 [2024-05-13 03:12:04.760377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.760675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.760711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.244 qpair failed and we were unable to recover it. 00:31:14.244 [2024-05-13 03:12:04.760962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.761221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.761245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.244 qpair failed and we were unable to recover it. 00:31:14.244 [2024-05-13 03:12:04.761569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.761847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.761873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.244 qpair failed and we were unable to recover it. 00:31:14.244 [2024-05-13 03:12:04.762091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.762367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.762392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.244 qpair failed and we were unable to recover it. 00:31:14.244 [2024-05-13 03:12:04.762678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.762925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.762954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.244 qpair failed and we were unable to recover it. 
00:31:14.244 [2024-05-13 03:12:04.763234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.763543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.763572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.244 qpair failed and we were unable to recover it. 00:31:14.244 [2024-05-13 03:12:04.763850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.764066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.764092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.244 qpair failed and we were unable to recover it. 00:31:14.244 [2024-05-13 03:12:04.764340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.764607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.764635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.244 qpair failed and we were unable to recover it. 00:31:14.244 [2024-05-13 03:12:04.764854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.765056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.765080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.244 qpair failed and we were unable to recover it. 00:31:14.244 [2024-05-13 03:12:04.765301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.765529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.765553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.244 qpair failed and we were unable to recover it. 00:31:14.244 [2024-05-13 03:12:04.765755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.765943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.765968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.244 qpair failed and we were unable to recover it. 00:31:14.244 [2024-05-13 03:12:04.766181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.766415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.766441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.244 qpair failed and we were unable to recover it. 
00:31:14.244 [2024-05-13 03:12:04.766692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.766916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.766942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.244 qpair failed and we were unable to recover it. 00:31:14.244 [2024-05-13 03:12:04.767226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.767510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.767534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.244 qpair failed and we were unable to recover it. 00:31:14.244 [2024-05-13 03:12:04.767784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.768023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.768051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.244 qpair failed and we were unable to recover it. 00:31:14.244 [2024-05-13 03:12:04.768287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.768717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.244 [2024-05-13 03:12:04.768780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.245 qpair failed and we were unable to recover it. 00:31:14.245 [2024-05-13 03:12:04.769026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.769228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.769258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.245 qpair failed and we were unable to recover it. 00:31:14.245 [2024-05-13 03:12:04.769515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.769759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.769786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.245 qpair failed and we were unable to recover it. 00:31:14.245 [2024-05-13 03:12:04.770010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.770500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.770553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.245 qpair failed and we were unable to recover it. 
00:31:14.245 [2024-05-13 03:12:04.770825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.771268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.771319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.245 qpair failed and we were unable to recover it. 00:31:14.245 [2024-05-13 03:12:04.771615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.771900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.771929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.245 qpair failed and we were unable to recover it. 00:31:14.245 [2024-05-13 03:12:04.772179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.772400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.772424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.245 qpair failed and we were unable to recover it. 00:31:14.245 [2024-05-13 03:12:04.772733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.773034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.773084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.245 qpair failed and we were unable to recover it. 00:31:14.245 [2024-05-13 03:12:04.773324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.773531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.773556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.245 qpair failed and we were unable to recover it. 00:31:14.245 [2024-05-13 03:12:04.773816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.774143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.774208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.245 qpair failed and we were unable to recover it. 00:31:14.245 [2024-05-13 03:12:04.774451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.774700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.774729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.245 qpair failed and we were unable to recover it. 
00:31:14.245 [2024-05-13 03:12:04.774956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.775198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.775238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.245 qpair failed and we were unable to recover it. 00:31:14.245 [2024-05-13 03:12:04.775449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.775703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.775727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.245 qpair failed and we were unable to recover it. 00:31:14.245 [2024-05-13 03:12:04.775976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.776448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.776496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.245 qpair failed and we were unable to recover it. 00:31:14.245 [2024-05-13 03:12:04.776735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.776983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.777023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.245 qpair failed and we were unable to recover it. 00:31:14.245 [2024-05-13 03:12:04.777279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.777476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.777501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.245 qpair failed and we were unable to recover it. 00:31:14.245 [2024-05-13 03:12:04.777728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.777973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.777998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.245 qpair failed and we were unable to recover it. 00:31:14.245 [2024-05-13 03:12:04.778308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.778604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.778632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.245 qpair failed and we were unable to recover it. 
00:31:14.245 [2024-05-13 03:12:04.778875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.779126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.779152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.245 qpair failed and we were unable to recover it. 00:31:14.245 [2024-05-13 03:12:04.779403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.779648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.779673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.245 qpair failed and we were unable to recover it. 00:31:14.245 [2024-05-13 03:12:04.779926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.780144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.780169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.245 qpair failed and we were unable to recover it. 00:31:14.245 [2024-05-13 03:12:04.780453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.780692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.780724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.245 qpair failed and we were unable to recover it. 00:31:14.245 [2024-05-13 03:12:04.780947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.781273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.781301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.245 qpair failed and we were unable to recover it. 00:31:14.245 [2024-05-13 03:12:04.781575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.781832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.781859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.245 qpair failed and we were unable to recover it. 00:31:14.245 [2024-05-13 03:12:04.782075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.782273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.782298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.245 qpair failed and we were unable to recover it. 
00:31:14.245 [2024-05-13 03:12:04.782507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.782799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.782825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.245 qpair failed and we were unable to recover it. 00:31:14.245 [2024-05-13 03:12:04.783108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.783319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.783348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.245 qpair failed and we were unable to recover it. 00:31:14.245 [2024-05-13 03:12:04.783582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.783853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.783882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.245 qpair failed and we were unable to recover it. 00:31:14.245 [2024-05-13 03:12:04.784148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.784644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.784692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.245 qpair failed and we were unable to recover it. 00:31:14.245 [2024-05-13 03:12:04.784945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.785205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.245 [2024-05-13 03:12:04.785230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.245 qpair failed and we were unable to recover it. 00:31:14.245 [2024-05-13 03:12:04.785471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.785665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.785690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.246 qpair failed and we were unable to recover it. 00:31:14.246 [2024-05-13 03:12:04.785979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.786275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.786299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.246 qpair failed and we were unable to recover it. 
00:31:14.246 [2024-05-13 03:12:04.786612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.786845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.786875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.246 qpair failed and we were unable to recover it. 00:31:14.246 [2024-05-13 03:12:04.787145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.787349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.787373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.246 qpair failed and we were unable to recover it. 00:31:14.246 [2024-05-13 03:12:04.787597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.787813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.787839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.246 qpair failed and we were unable to recover it. 00:31:14.246 [2024-05-13 03:12:04.788052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.788260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.788285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.246 qpair failed and we were unable to recover it. 00:31:14.246 [2024-05-13 03:12:04.788530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.788735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.788760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.246 qpair failed and we were unable to recover it. 00:31:14.246 [2024-05-13 03:12:04.788967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.789191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.789217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.246 qpair failed and we were unable to recover it. 00:31:14.246 [2024-05-13 03:12:04.789414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.789619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.789644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.246 qpair failed and we were unable to recover it. 
00:31:14.246 [2024-05-13 03:12:04.789852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.790051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.790075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.246 qpair failed and we were unable to recover it. 00:31:14.246 [2024-05-13 03:12:04.790285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.790573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.790599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.246 qpair failed and we were unable to recover it. 00:31:14.246 [2024-05-13 03:12:04.790870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.791092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.791117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.246 qpair failed and we were unable to recover it. 00:31:14.246 [2024-05-13 03:12:04.791439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.791676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.791712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.246 qpair failed and we were unable to recover it. 00:31:14.246 [2024-05-13 03:12:04.791923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.792139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.792169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.246 qpair failed and we were unable to recover it. 00:31:14.246 [2024-05-13 03:12:04.792439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.792658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.792689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.246 qpair failed and we were unable to recover it. 00:31:14.246 [2024-05-13 03:12:04.792978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.793464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.793515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.246 qpair failed and we were unable to recover it. 
00:31:14.246 [2024-05-13 03:12:04.793768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.794014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.794053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.246 qpair failed and we were unable to recover it. 00:31:14.246 [2024-05-13 03:12:04.794300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.794625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.794654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.246 qpair failed and we were unable to recover it. 00:31:14.246 [2024-05-13 03:12:04.794908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.795152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.795192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.246 qpair failed and we were unable to recover it. 00:31:14.246 [2024-05-13 03:12:04.795457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.795705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.795734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.246 qpair failed and we were unable to recover it. 00:31:14.246 [2024-05-13 03:12:04.795943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.796181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.796221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.246 qpair failed and we were unable to recover it. 00:31:14.246 [2024-05-13 03:12:04.796475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.796719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.796748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.246 qpair failed and we were unable to recover it. 00:31:14.246 [2024-05-13 03:12:04.796985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.797315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.797343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.246 qpair failed and we were unable to recover it. 
00:31:14.246 [2024-05-13 03:12:04.797572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.797793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.797819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.246 qpair failed and we were unable to recover it. 00:31:14.246 [2024-05-13 03:12:04.798037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.798325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.798398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.246 qpair failed and we were unable to recover it. 00:31:14.246 [2024-05-13 03:12:04.798611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.798862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.798888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.246 qpair failed and we were unable to recover it. 00:31:14.246 [2024-05-13 03:12:04.799107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.799325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.799350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.246 qpair failed and we were unable to recover it. 00:31:14.246 [2024-05-13 03:12:04.799571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.799814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.799847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.246 qpair failed and we were unable to recover it. 00:31:14.246 [2024-05-13 03:12:04.800088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.800289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.246 [2024-05-13 03:12:04.800320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.246 qpair failed and we were unable to recover it. 00:31:14.246 [2024-05-13 03:12:04.800576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.800796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.800823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.247 qpair failed and we were unable to recover it. 
00:31:14.247 [2024-05-13 03:12:04.801063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.801381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.801406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.247 qpair failed and we were unable to recover it. 00:31:14.247 [2024-05-13 03:12:04.801668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.801895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.801928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.247 qpair failed and we were unable to recover it. 00:31:14.247 [2024-05-13 03:12:04.802191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.802423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.802449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.247 qpair failed and we were unable to recover it. 00:31:14.247 [2024-05-13 03:12:04.802716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.802953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.802979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.247 qpair failed and we were unable to recover it. 00:31:14.247 [2024-05-13 03:12:04.803324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.803551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.803576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.247 qpair failed and we were unable to recover it. 00:31:14.247 [2024-05-13 03:12:04.803834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.804064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.804089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.247 qpair failed and we were unable to recover it. 00:31:14.247 [2024-05-13 03:12:04.804319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.804570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.804599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.247 qpair failed and we were unable to recover it. 
00:31:14.247 [2024-05-13 03:12:04.804847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.805169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.805231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.247 qpair failed and we were unable to recover it. 00:31:14.247 [2024-05-13 03:12:04.805479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.805706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.805733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.247 qpair failed and we were unable to recover it. 00:31:14.247 [2024-05-13 03:12:04.805981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.806483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.806533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.247 qpair failed and we were unable to recover it. 00:31:14.247 [2024-05-13 03:12:04.806806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.807152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.807204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.247 qpair failed and we were unable to recover it. 00:31:14.247 [2024-05-13 03:12:04.807470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.807752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.807781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.247 qpair failed and we were unable to recover it. 00:31:14.247 [2024-05-13 03:12:04.808026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.808274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.808302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.247 qpair failed and we were unable to recover it. 00:31:14.247 [2024-05-13 03:12:04.808580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.808795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.808821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.247 qpair failed and we were unable to recover it. 
00:31:14.247 [2024-05-13 03:12:04.809013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.809275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.809300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.247 qpair failed and we were unable to recover it. 00:31:14.247 [2024-05-13 03:12:04.809546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.809927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.809955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.247 qpair failed and we were unable to recover it. 00:31:14.247 [2024-05-13 03:12:04.810195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.810639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.810689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.247 qpair failed and we were unable to recover it. 00:31:14.247 [2024-05-13 03:12:04.810932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.811240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.811287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.247 qpair failed and we were unable to recover it. 00:31:14.247 [2024-05-13 03:12:04.811562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.811830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.811859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.247 qpair failed and we were unable to recover it. 00:31:14.247 [2024-05-13 03:12:04.812123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.812429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.812471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.247 qpair failed and we were unable to recover it. 00:31:14.247 [2024-05-13 03:12:04.812708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.812920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.812946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.247 qpair failed and we were unable to recover it. 
00:31:14.247 [2024-05-13 03:12:04.813268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.813519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.813544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.247 qpair failed and we were unable to recover it. 00:31:14.247 [2024-05-13 03:12:04.813811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.814193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.814250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.247 qpair failed and we were unable to recover it. 00:31:14.247 [2024-05-13 03:12:04.814558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.814827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.814857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.247 qpair failed and we were unable to recover it. 00:31:14.247 [2024-05-13 03:12:04.815123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.815424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.815449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.247 qpair failed and we were unable to recover it. 00:31:14.247 [2024-05-13 03:12:04.815734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.815975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.816003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.247 qpair failed and we were unable to recover it. 00:31:14.247 [2024-05-13 03:12:04.816283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.816728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.816784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.247 qpair failed and we were unable to recover it. 00:31:14.247 [2024-05-13 03:12:04.817028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.817232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.817261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.247 qpair failed and we were unable to recover it. 
00:31:14.247 [2024-05-13 03:12:04.817512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.817793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.247 [2024-05-13 03:12:04.817822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.248 qpair failed and we were unable to recover it. 00:31:14.248 [2024-05-13 03:12:04.818035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.248 [2024-05-13 03:12:04.818280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.248 [2024-05-13 03:12:04.818305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.248 qpair failed and we were unable to recover it. 00:31:14.248 [2024-05-13 03:12:04.818579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.248 [2024-05-13 03:12:04.818844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.248 [2024-05-13 03:12:04.818874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.248 qpair failed and we were unable to recover it. 00:31:14.248 [2024-05-13 03:12:04.819090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.248 [2024-05-13 03:12:04.819479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.248 [2024-05-13 03:12:04.819531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.248 qpair failed and we were unable to recover it. 00:31:14.248 [2024-05-13 03:12:04.819844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.248 [2024-05-13 03:12:04.820150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.248 [2024-05-13 03:12:04.820174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.248 qpair failed and we were unable to recover it. 00:31:14.248 [2024-05-13 03:12:04.820460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.248 [2024-05-13 03:12:04.820704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.248 [2024-05-13 03:12:04.820733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.248 qpair failed and we were unable to recover it. 00:31:14.248 [2024-05-13 03:12:04.820967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.248 [2024-05-13 03:12:04.821168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.248 [2024-05-13 03:12:04.821193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.248 qpair failed and we were unable to recover it. 
00:31:14.248 [2024-05-13 03:12:04.821413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.248 [2024-05-13 03:12:04.821672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.248 [2024-05-13 03:12:04.821721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.248 qpair failed and we were unable to recover it. 00:31:14.248 [2024-05-13 03:12:04.821958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.248 [2024-05-13 03:12:04.822163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.248 [2024-05-13 03:12:04.822189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.248 qpair failed and we were unable to recover it. 00:31:14.248 [2024-05-13 03:12:04.822440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.248 [2024-05-13 03:12:04.822722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.248 [2024-05-13 03:12:04.822752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.248 qpair failed and we were unable to recover it. 00:31:14.248 [2024-05-13 03:12:04.822969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.248 [2024-05-13 03:12:04.823316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.248 [2024-05-13 03:12:04.823378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.248 qpair failed and we were unable to recover it. 00:31:14.248 [2024-05-13 03:12:04.823583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.248 [2024-05-13 03:12:04.823791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.248 [2024-05-13 03:12:04.823820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.248 qpair failed and we were unable to recover it. 00:31:14.248 [2024-05-13 03:12:04.824067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.248 [2024-05-13 03:12:04.824269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.248 [2024-05-13 03:12:04.824295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.248 qpair failed and we were unable to recover it. 00:31:14.248 [2024-05-13 03:12:04.824536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.248 [2024-05-13 03:12:04.824775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.248 [2024-05-13 03:12:04.824800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.248 qpair failed and we were unable to recover it. 
00:31:14.248 [2024-05-13 03:12:04.825032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.248 [2024-05-13 03:12:04.825280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.248 [2024-05-13 03:12:04.825305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.248 qpair failed and we were unable to recover it. 00:31:14.248 [2024-05-13 03:12:04.825592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.248 [2024-05-13 03:12:04.825832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.248 [2024-05-13 03:12:04.825859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.248 qpair failed and we were unable to recover it. 00:31:14.248 [2024-05-13 03:12:04.826083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.248 [2024-05-13 03:12:04.826270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.248 [2024-05-13 03:12:04.826295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.248 qpair failed and we were unable to recover it. 00:31:14.248 [2024-05-13 03:12:04.826533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.248 [2024-05-13 03:12:04.826799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.248 [2024-05-13 03:12:04.826841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.248 qpair failed and we were unable to recover it. 00:31:14.248 [2024-05-13 03:12:04.827130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.248 [2024-05-13 03:12:04.827396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.248 [2024-05-13 03:12:04.827425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.248 qpair failed and we were unable to recover it. 00:31:14.248 [2024-05-13 03:12:04.827679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.248 [2024-05-13 03:12:04.827919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.248 [2024-05-13 03:12:04.827945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.248 qpair failed and we were unable to recover it. 00:31:14.248 [2024-05-13 03:12:04.828230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.248 [2024-05-13 03:12:04.828528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.248 [2024-05-13 03:12:04.828555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.248 qpair failed and we were unable to recover it. 
00:31:14.248 [2024-05-13 03:12:04.828775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.248 [2024-05-13 03:12:04.828984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.248 [2024-05-13 03:12:04.829027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.248 qpair failed and we were unable to recover it. 00:31:14.248 [2024-05-13 03:12:04.829276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.248 [2024-05-13 03:12:04.829481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.248 [2024-05-13 03:12:04.829522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.248 qpair failed and we were unable to recover it. 00:31:14.248 [2024-05-13 03:12:04.829837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.248 [2024-05-13 03:12:04.830056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.248 [2024-05-13 03:12:04.830086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.248 qpair failed and we were unable to recover it. 00:31:14.249 [2024-05-13 03:12:04.830329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.830567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.830596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.249 qpair failed and we were unable to recover it. 00:31:14.249 [2024-05-13 03:12:04.830845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.831040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.831068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.249 qpair failed and we were unable to recover it. 00:31:14.249 [2024-05-13 03:12:04.831364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.831615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.831644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.249 qpair failed and we were unable to recover it. 00:31:14.249 [2024-05-13 03:12:04.831856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.832096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.832123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.249 qpair failed and we were unable to recover it. 
00:31:14.249 [2024-05-13 03:12:04.832344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.832538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.832564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.249 qpair failed and we were unable to recover it. 00:31:14.249 [2024-05-13 03:12:04.832804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.833057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.833097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.249 qpair failed and we were unable to recover it. 00:31:14.249 [2024-05-13 03:12:04.833328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.833560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.833589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.249 qpair failed and we were unable to recover it. 00:31:14.249 [2024-05-13 03:12:04.833842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.834088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.834114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.249 qpair failed and we were unable to recover it. 00:31:14.249 [2024-05-13 03:12:04.834394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.834601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.834627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.249 qpair failed and we were unable to recover it. 00:31:14.249 [2024-05-13 03:12:04.834844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.835089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.835115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.249 qpair failed and we were unable to recover it. 00:31:14.249 [2024-05-13 03:12:04.835323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.835540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.835566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.249 qpair failed and we were unable to recover it. 
00:31:14.249 [2024-05-13 03:12:04.835859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.836092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.836133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.249 qpair failed and we were unable to recover it. 00:31:14.249 [2024-05-13 03:12:04.836377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.836625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.836654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.249 qpair failed and we were unable to recover it. 00:31:14.249 [2024-05-13 03:12:04.836892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.837095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.837123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.249 qpair failed and we were unable to recover it. 00:31:14.249 [2024-05-13 03:12:04.837460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.837745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.837773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.249 qpair failed and we were unable to recover it. 00:31:14.249 [2024-05-13 03:12:04.838022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.838239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.838264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.249 qpair failed and we were unable to recover it. 00:31:14.249 [2024-05-13 03:12:04.838486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.838794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.838821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.249 qpair failed and we were unable to recover it. 00:31:14.249 [2024-05-13 03:12:04.839081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.839345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.839374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.249 qpair failed and we were unable to recover it. 
00:31:14.249 [2024-05-13 03:12:04.839631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.839872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.839898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.249 qpair failed and we were unable to recover it. 00:31:14.249 [2024-05-13 03:12:04.840130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.840388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.840414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.249 qpair failed and we were unable to recover it. 00:31:14.249 [2024-05-13 03:12:04.840663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.840895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.840922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.249 qpair failed and we were unable to recover it. 00:31:14.249 [2024-05-13 03:12:04.841150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.841420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.841445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.249 qpair failed and we were unable to recover it. 00:31:14.249 [2024-05-13 03:12:04.841678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.841881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.841908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.249 qpair failed and we were unable to recover it. 00:31:14.249 [2024-05-13 03:12:04.842150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.842401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.842442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.249 qpair failed and we were unable to recover it. 00:31:14.249 [2024-05-13 03:12:04.842705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.842919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.842948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.249 qpair failed and we were unable to recover it. 
00:31:14.249 [2024-05-13 03:12:04.843178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.843399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.843425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.249 qpair failed and we were unable to recover it. 00:31:14.249 [2024-05-13 03:12:04.843706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.843951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.843980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.249 qpair failed and we were unable to recover it. 00:31:14.249 [2024-05-13 03:12:04.844220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.844487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.844516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.249 qpair failed and we were unable to recover it. 00:31:14.249 [2024-05-13 03:12:04.844780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.845060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.845087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.249 qpair failed and we were unable to recover it. 00:31:14.249 [2024-05-13 03:12:04.845291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.249 [2024-05-13 03:12:04.845501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.845528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.250 qpair failed and we were unable to recover it. 00:31:14.250 [2024-05-13 03:12:04.845810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.846069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.846095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.250 qpair failed and we were unable to recover it. 00:31:14.250 [2024-05-13 03:12:04.846358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.846559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.846585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.250 qpair failed and we were unable to recover it. 
00:31:14.250 [2024-05-13 03:12:04.846796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.847055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.847082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.250 qpair failed and we were unable to recover it. 00:31:14.250 [2024-05-13 03:12:04.847344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.847602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.847628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.250 qpair failed and we were unable to recover it. 00:31:14.250 [2024-05-13 03:12:04.847821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.848117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.848144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.250 qpair failed and we were unable to recover it. 00:31:14.250 [2024-05-13 03:12:04.848432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.848706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.848735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.250 qpair failed and we were unable to recover it. 00:31:14.250 [2024-05-13 03:12:04.848967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.849427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.849478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.250 qpair failed and we were unable to recover it. 00:31:14.250 [2024-05-13 03:12:04.849729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.849972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.849998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.250 qpair failed and we were unable to recover it. 00:31:14.250 [2024-05-13 03:12:04.850252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.850524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.850550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.250 qpair failed and we were unable to recover it. 
00:31:14.250 [2024-05-13 03:12:04.850743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.851040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.851065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.250 qpair failed and we were unable to recover it. 00:31:14.250 [2024-05-13 03:12:04.851317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.851539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.851565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.250 qpair failed and we were unable to recover it. 00:31:14.250 [2024-05-13 03:12:04.851843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.852226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.852276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.250 qpair failed and we were unable to recover it. 00:31:14.250 [2024-05-13 03:12:04.852520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.852760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.852787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.250 qpair failed and we were unable to recover it. 00:31:14.250 [2024-05-13 03:12:04.853047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.853276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.853301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.250 qpair failed and we were unable to recover it. 00:31:14.250 [2024-05-13 03:12:04.853557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.853794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.853824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.250 qpair failed and we were unable to recover it. 00:31:14.250 [2024-05-13 03:12:04.854063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.854276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.854304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.250 qpair failed and we were unable to recover it. 
00:31:14.250 [2024-05-13 03:12:04.854578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.854795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.854824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.250 qpair failed and we were unable to recover it. 00:31:14.250 [2024-05-13 03:12:04.855039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.855281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.855307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.250 qpair failed and we were unable to recover it. 00:31:14.250 [2024-05-13 03:12:04.855557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.855825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.855855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.250 qpair failed and we were unable to recover it. 00:31:14.250 [2024-05-13 03:12:04.856120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.856319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.856346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.250 qpair failed and we were unable to recover it. 00:31:14.250 [2024-05-13 03:12:04.856597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.856839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.856867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.250 qpair failed and we were unable to recover it. 00:31:14.250 [2024-05-13 03:12:04.857159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.857573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.857624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.250 qpair failed and we were unable to recover it. 00:31:14.250 [2024-05-13 03:12:04.857867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.858089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.858115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.250 qpair failed and we were unable to recover it. 
00:31:14.250 [2024-05-13 03:12:04.858332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.858542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.858568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.250 qpair failed and we were unable to recover it. 00:31:14.250 [2024-05-13 03:12:04.858775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.859015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.859041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.250 qpair failed and we were unable to recover it. 00:31:14.250 [2024-05-13 03:12:04.859293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.859474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.859499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.250 qpair failed and we were unable to recover it. 00:31:14.250 [2024-05-13 03:12:04.859693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.859948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.859975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.250 qpair failed and we were unable to recover it. 00:31:14.250 [2024-05-13 03:12:04.860206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.860441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.860469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.250 qpair failed and we were unable to recover it. 00:31:14.250 [2024-05-13 03:12:04.860676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.860905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.250 [2024-05-13 03:12:04.860931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.250 qpair failed and we were unable to recover it. 00:31:14.251 [2024-05-13 03:12:04.861152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.861355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.861381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.251 qpair failed and we were unable to recover it. 
00:31:14.251 [2024-05-13 03:12:04.861601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.861822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.861850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.251 qpair failed and we were unable to recover it. 00:31:14.251 [2024-05-13 03:12:04.862062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.862320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.862346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.251 qpair failed and we were unable to recover it. 00:31:14.251 [2024-05-13 03:12:04.862577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.862815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.862845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.251 qpair failed and we were unable to recover it. 00:31:14.251 [2024-05-13 03:12:04.863084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.863324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.863353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.251 qpair failed and we were unable to recover it. 00:31:14.251 [2024-05-13 03:12:04.863578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.863786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.863813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.251 qpair failed and we were unable to recover it. 00:31:14.251 [2024-05-13 03:12:04.864060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.864282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.864314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.251 qpair failed and we were unable to recover it. 00:31:14.251 [2024-05-13 03:12:04.864554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.864799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.864829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.251 qpair failed and we were unable to recover it. 
00:31:14.251 [2024-05-13 03:12:04.865086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.865310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.865336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.251 qpair failed and we were unable to recover it. 00:31:14.251 [2024-05-13 03:12:04.865585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.865818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.865845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.251 qpair failed and we were unable to recover it. 00:31:14.251 [2024-05-13 03:12:04.866089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.866402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.866443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.251 qpair failed and we were unable to recover it. 00:31:14.251 [2024-05-13 03:12:04.866717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.866998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.867023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.251 qpair failed and we were unable to recover it. 00:31:14.251 [2024-05-13 03:12:04.867253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.867499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.867525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.251 qpair failed and we were unable to recover it. 00:31:14.251 [2024-05-13 03:12:04.867751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.867967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.867995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.251 qpair failed and we were unable to recover it. 00:31:14.251 [2024-05-13 03:12:04.868227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.868469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.868510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.251 qpair failed and we were unable to recover it. 
00:31:14.251 [2024-05-13 03:12:04.868771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.868974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.869003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.251 qpair failed and we were unable to recover it. 00:31:14.251 [2024-05-13 03:12:04.869265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.869564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.869589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.251 qpair failed and we were unable to recover it. 00:31:14.251 [2024-05-13 03:12:04.869819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.870043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.870069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.251 qpair failed and we were unable to recover it. 00:31:14.251 [2024-05-13 03:12:04.870293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.870533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.870559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.251 qpair failed and we were unable to recover it. 00:31:14.251 [2024-05-13 03:12:04.870820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.871082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.871111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.251 qpair failed and we were unable to recover it. 00:31:14.251 [2024-05-13 03:12:04.871389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.871620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.871646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.251 qpair failed and we were unable to recover it. 00:31:14.251 [2024-05-13 03:12:04.871917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.872120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.872149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.251 qpair failed and we were unable to recover it. 
00:31:14.251 [2024-05-13 03:12:04.872394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.872626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.872655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.251 qpair failed and we were unable to recover it. 00:31:14.251 [2024-05-13 03:12:04.872873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.873061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.873088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.251 qpair failed and we were unable to recover it. 00:31:14.251 [2024-05-13 03:12:04.873299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.873550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.873576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.251 qpair failed and we were unable to recover it. 00:31:14.251 [2024-05-13 03:12:04.873814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.874163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.874222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.251 qpair failed and we were unable to recover it. 00:31:14.251 [2024-05-13 03:12:04.874456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.874683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.874725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.251 qpair failed and we were unable to recover it. 00:31:14.251 [2024-05-13 03:12:04.874954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.875169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.875198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.251 qpair failed and we were unable to recover it. 00:31:14.251 [2024-05-13 03:12:04.875438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.875679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.251 [2024-05-13 03:12:04.875717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.251 qpair failed and we were unable to recover it. 
00:31:14.251 [2024-05-13 03:12:04.875958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.252 [2024-05-13 03:12:04.876203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.252 [2024-05-13 03:12:04.876244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.252 qpair failed and we were unable to recover it. 00:31:14.252 [2024-05-13 03:12:04.876566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.252 [2024-05-13 03:12:04.876802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.252 [2024-05-13 03:12:04.876831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.252 qpair failed and we were unable to recover it. 00:31:14.252 [2024-05-13 03:12:04.877066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.252 [2024-05-13 03:12:04.877305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.252 [2024-05-13 03:12:04.877331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.252 qpair failed and we were unable to recover it. 00:31:14.252 [2024-05-13 03:12:04.877549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.252 [2024-05-13 03:12:04.877761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.252 [2024-05-13 03:12:04.877788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.252 qpair failed and we were unable to recover it. 00:31:14.252 [2024-05-13 03:12:04.878006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.252 [2024-05-13 03:12:04.878228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.252 [2024-05-13 03:12:04.878254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.252 qpair failed and we were unable to recover it. 00:31:14.252 [2024-05-13 03:12:04.878567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.252 [2024-05-13 03:12:04.878835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.252 [2024-05-13 03:12:04.878865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.252 qpair failed and we were unable to recover it. 00:31:14.252 [2024-05-13 03:12:04.879079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.252 [2024-05-13 03:12:04.879299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.252 [2024-05-13 03:12:04.879325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.252 qpair failed and we were unable to recover it. 
00:31:14.252 [2024-05-13 03:12:04.879567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.252 [2024-05-13 03:12:04.879833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.252 [2024-05-13 03:12:04.879863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.252 qpair failed and we were unable to recover it. 00:31:14.252 [2024-05-13 03:12:04.880126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.252 [2024-05-13 03:12:04.880434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.252 [2024-05-13 03:12:04.880459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.252 qpair failed and we were unable to recover it. 00:31:14.252 [2024-05-13 03:12:04.880674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.252 [2024-05-13 03:12:04.880931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.252 [2024-05-13 03:12:04.880971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.252 qpair failed and we were unable to recover it. 00:31:14.252 [2024-05-13 03:12:04.881228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.252 [2024-05-13 03:12:04.881621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.252 [2024-05-13 03:12:04.881677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.252 qpair failed and we were unable to recover it. 00:31:14.252 [2024-05-13 03:12:04.881929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.252 [2024-05-13 03:12:04.882419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.252 [2024-05-13 03:12:04.882468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.252 qpair failed and we were unable to recover it. 00:31:14.252 [2024-05-13 03:12:04.882682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.252 [2024-05-13 03:12:04.882956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.252 [2024-05-13 03:12:04.882982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.252 qpair failed and we were unable to recover it. 00:31:14.252 [2024-05-13 03:12:04.883277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.252 [2024-05-13 03:12:04.883488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.252 [2024-05-13 03:12:04.883514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.252 qpair failed and we were unable to recover it. 
00:31:14.252 [2024-05-13 03:12:04.883743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.252 [2024-05-13 03:12:04.883987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.252 [2024-05-13 03:12:04.884012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.252 qpair failed and we were unable to recover it. 00:31:14.252 [2024-05-13 03:12:04.884284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.252 [2024-05-13 03:12:04.884676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.252 [2024-05-13 03:12:04.884733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.252 qpair failed and we were unable to recover it. 00:31:14.252 [2024-05-13 03:12:04.884969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.252 [2024-05-13 03:12:04.885179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.252 [2024-05-13 03:12:04.885205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.252 qpair failed and we were unable to recover it. 00:31:14.252 [2024-05-13 03:12:04.885433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.252 [2024-05-13 03:12:04.885737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.252 [2024-05-13 03:12:04.885763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.252 qpair failed and we were unable to recover it. 00:31:14.252 [2024-05-13 03:12:04.886018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.252 [2024-05-13 03:12:04.886231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.252 [2024-05-13 03:12:04.886263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.252 qpair failed and we were unable to recover it. 00:31:14.252 [2024-05-13 03:12:04.886506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.252 [2024-05-13 03:12:04.886756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.252 [2024-05-13 03:12:04.886797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.252 qpair failed and we were unable to recover it. 00:31:14.252 [2024-05-13 03:12:04.887046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.252 [2024-05-13 03:12:04.887390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.252 [2024-05-13 03:12:04.887438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.252 qpair failed and we were unable to recover it. 
00:31:14.252 [2024-05-13 03:12:04.887704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.252 [2024-05-13 03:12:04.887994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.252 [2024-05-13 03:12:04.888023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.252 qpair failed and we were unable to recover it. 00:31:14.252 [2024-05-13 03:12:04.888290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.252 [2024-05-13 03:12:04.888592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.252 [2024-05-13 03:12:04.888623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.252 qpair failed and we were unable to recover it. 00:31:14.252 [2024-05-13 03:12:04.888901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.252 [2024-05-13 03:12:04.889145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.252 [2024-05-13 03:12:04.889170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.252 qpair failed and we were unable to recover it. 00:31:14.252 [2024-05-13 03:12:04.889433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.252 [2024-05-13 03:12:04.889674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.252 [2024-05-13 03:12:04.889710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.252 qpair failed and we were unable to recover it. 00:31:14.252 [2024-05-13 03:12:04.889948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.890216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.890246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.253 qpair failed and we were unable to recover it. 00:31:14.253 [2024-05-13 03:12:04.890512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.890767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.890797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.253 qpair failed and we were unable to recover it. 00:31:14.253 [2024-05-13 03:12:04.891022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.891255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.891281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.253 qpair failed and we were unable to recover it. 
00:31:14.253 [2024-05-13 03:12:04.891565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.891810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.891844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.253 qpair failed and we were unable to recover it. 00:31:14.253 [2024-05-13 03:12:04.892084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.892563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.892613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.253 qpair failed and we were unable to recover it. 00:31:14.253 [2024-05-13 03:12:04.892845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.893061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.893100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.253 qpair failed and we were unable to recover it. 00:31:14.253 [2024-05-13 03:12:04.893351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.893714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.893777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.253 qpair failed and we were unable to recover it. 00:31:14.253 [2024-05-13 03:12:04.894052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.894481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.894533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.253 qpair failed and we were unable to recover it. 00:31:14.253 [2024-05-13 03:12:04.894837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.895070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.895100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.253 qpair failed and we were unable to recover it. 00:31:14.253 [2024-05-13 03:12:04.895368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.895804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.895834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.253 qpair failed and we were unable to recover it. 
00:31:14.253 [2024-05-13 03:12:04.896093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.896322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.896346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.253 qpair failed and we were unable to recover it. 00:31:14.253 [2024-05-13 03:12:04.896592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.896894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.896924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.253 qpair failed and we were unable to recover it. 00:31:14.253 [2024-05-13 03:12:04.897188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.897532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.897560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.253 qpair failed and we were unable to recover it. 00:31:14.253 [2024-05-13 03:12:04.897770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.898005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.898040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.253 qpair failed and we were unable to recover it. 00:31:14.253 [2024-05-13 03:12:04.898276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.898523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.898549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.253 qpair failed and we were unable to recover it. 00:31:14.253 [2024-05-13 03:12:04.898772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.899017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.899046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.253 qpair failed and we were unable to recover it. 00:31:14.253 [2024-05-13 03:12:04.899323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.899584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.899624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.253 qpair failed and we were unable to recover it. 
00:31:14.253 [2024-05-13 03:12:04.899928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.900179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.900205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.253 qpair failed and we were unable to recover it. 00:31:14.253 [2024-05-13 03:12:04.900466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.900706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.900747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.253 qpair failed and we were unable to recover it. 00:31:14.253 [2024-05-13 03:12:04.900941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.901149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.901174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.253 qpair failed and we were unable to recover it. 00:31:14.253 [2024-05-13 03:12:04.901371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.901622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.901651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.253 qpair failed and we were unable to recover it. 00:31:14.253 [2024-05-13 03:12:04.901899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.902176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.902200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.253 qpair failed and we were unable to recover it. 00:31:14.253 [2024-05-13 03:12:04.902455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.902685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.902723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.253 qpair failed and we were unable to recover it. 00:31:14.253 [2024-05-13 03:12:04.902995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.903405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.903463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.253 qpair failed and we were unable to recover it. 
00:31:14.253 [2024-05-13 03:12:04.903692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.903926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.903951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.253 qpair failed and we were unable to recover it. 00:31:14.253 [2024-05-13 03:12:04.904186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.904469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.904494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.253 qpair failed and we were unable to recover it. 00:31:14.253 [2024-05-13 03:12:04.904793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.905117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.905174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.253 qpair failed and we were unable to recover it. 00:31:14.253 [2024-05-13 03:12:04.905501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.905818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.905843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.253 qpair failed and we were unable to recover it. 00:31:14.253 [2024-05-13 03:12:04.906147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.906536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.906561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.253 qpair failed and we were unable to recover it. 00:31:14.253 [2024-05-13 03:12:04.906848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.253 [2024-05-13 03:12:04.907102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.907141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.254 qpair failed and we were unable to recover it. 00:31:14.254 [2024-05-13 03:12:04.907371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.907642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.907671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.254 qpair failed and we were unable to recover it. 
00:31:14.254 [2024-05-13 03:12:04.907959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.908215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.908254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.254 qpair failed and we were unable to recover it. 00:31:14.254 [2024-05-13 03:12:04.908485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.908810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.908840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.254 qpair failed and we were unable to recover it. 00:31:14.254 [2024-05-13 03:12:04.909074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.909281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.909305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.254 qpair failed and we were unable to recover it. 00:31:14.254 [2024-05-13 03:12:04.909582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.909918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.909944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.254 qpair failed and we were unable to recover it. 00:31:14.254 [2024-05-13 03:12:04.910376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.910813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.910843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.254 qpair failed and we were unable to recover it. 00:31:14.254 [2024-05-13 03:12:04.911076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.911527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.911577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.254 qpair failed and we were unable to recover it. 00:31:14.254 [2024-05-13 03:12:04.911807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.912158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.912214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.254 qpair failed and we were unable to recover it. 
00:31:14.254 [2024-05-13 03:12:04.912473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.912649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.912674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.254 qpair failed and we were unable to recover it. 00:31:14.254 [2024-05-13 03:12:04.912881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.913175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.913200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.254 qpair failed and we were unable to recover it. 00:31:14.254 [2024-05-13 03:12:04.913456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.913720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.913750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.254 qpair failed and we were unable to recover it. 00:31:14.254 [2024-05-13 03:12:04.914008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.914296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.914321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.254 qpair failed and we were unable to recover it. 00:31:14.254 [2024-05-13 03:12:04.914760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.915015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.915055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.254 qpair failed and we were unable to recover it. 00:31:14.254 [2024-05-13 03:12:04.915353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.915597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.915622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.254 qpair failed and we were unable to recover it. 00:31:14.254 [2024-05-13 03:12:04.915895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.916361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.916410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.254 qpair failed and we were unable to recover it. 
00:31:14.254 [2024-05-13 03:12:04.916656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.916900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.916931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.254 qpair failed and we were unable to recover it. 00:31:14.254 [2024-05-13 03:12:04.917175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.917382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.917407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.254 qpair failed and we were unable to recover it. 00:31:14.254 [2024-05-13 03:12:04.917669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.917937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.917966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.254 qpair failed and we were unable to recover it. 00:31:14.254 [2024-05-13 03:12:04.918205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.918652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.918683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.254 qpair failed and we were unable to recover it. 00:31:14.254 [2024-05-13 03:12:04.918941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.919183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.919212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.254 qpair failed and we were unable to recover it. 00:31:14.254 [2024-05-13 03:12:04.919410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.919654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.919680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.254 qpair failed and we were unable to recover it. 00:31:14.254 [2024-05-13 03:12:04.920104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.920396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.920425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.254 qpair failed and we were unable to recover it. 
00:31:14.254 [2024-05-13 03:12:04.920711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.920956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.920997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.254 qpair failed and we were unable to recover it. 00:31:14.254 [2024-05-13 03:12:04.921285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.921683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.921746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.254 qpair failed and we were unable to recover it. 00:31:14.254 [2024-05-13 03:12:04.921998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.922252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.922277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.254 qpair failed and we were unable to recover it. 00:31:14.254 [2024-05-13 03:12:04.922532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.922768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.922798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.254 qpair failed and we were unable to recover it. 00:31:14.254 [2024-05-13 03:12:04.923238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.923571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.923599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.254 qpair failed and we were unable to recover it. 00:31:14.254 [2024-05-13 03:12:04.923816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.924029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.924071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.254 qpair failed and we were unable to recover it. 00:31:14.254 [2024-05-13 03:12:04.924313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.924557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.254 [2024-05-13 03:12:04.924598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.254 qpair failed and we were unable to recover it. 
00:31:14.255 [2024-05-13 03:12:04.924859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.925095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.925119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.255 qpair failed and we were unable to recover it. 00:31:14.255 [2024-05-13 03:12:04.925504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.925727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.925758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.255 qpair failed and we were unable to recover it. 00:31:14.255 [2024-05-13 03:12:04.926025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.926538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.926587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.255 qpair failed and we were unable to recover it. 00:31:14.255 [2024-05-13 03:12:04.926901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.927149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.927180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.255 qpair failed and we were unable to recover it. 00:31:14.255 [2024-05-13 03:12:04.927423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.927685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.927739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.255 qpair failed and we were unable to recover it. 00:31:14.255 [2024-05-13 03:12:04.928035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.928483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.928539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.255 qpair failed and we were unable to recover it. 00:31:14.255 [2024-05-13 03:12:04.928755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.929052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.929093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.255 qpair failed and we were unable to recover it. 
00:31:14.255 [2024-05-13 03:12:04.929370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.929683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.929730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.255 qpair failed and we were unable to recover it. 00:31:14.255 [2024-05-13 03:12:04.930014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.930410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.930466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.255 qpair failed and we were unable to recover it. 00:31:14.255 [2024-05-13 03:12:04.930748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.930967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.930993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.255 qpair failed and we were unable to recover it. 00:31:14.255 [2024-05-13 03:12:04.931212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.931452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.931477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.255 qpair failed and we were unable to recover it. 00:31:14.255 [2024-05-13 03:12:04.931761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.932120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.932178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.255 qpair failed and we were unable to recover it. 00:31:14.255 [2024-05-13 03:12:04.932405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.932635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.932660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.255 qpair failed and we were unable to recover it. 00:31:14.255 [2024-05-13 03:12:04.932996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.933273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.933297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.255 qpair failed and we were unable to recover it. 
00:31:14.255 [2024-05-13 03:12:04.933525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.933820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.933847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.255 qpair failed and we were unable to recover it. 00:31:14.255 [2024-05-13 03:12:04.934147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.934395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.934419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.255 qpair failed and we were unable to recover it. 00:31:14.255 [2024-05-13 03:12:04.934687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.934943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.934971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.255 qpair failed and we were unable to recover it. 00:31:14.255 [2024-05-13 03:12:04.935297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.935775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.935805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.255 qpair failed and we were unable to recover it. 00:31:14.255 [2024-05-13 03:12:04.936051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.936304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.936346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.255 qpair failed and we were unable to recover it. 00:31:14.255 [2024-05-13 03:12:04.936572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.936814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.936855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.255 qpair failed and we were unable to recover it. 00:31:14.255 [2024-05-13 03:12:04.937076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.937303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.937328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.255 qpair failed and we were unable to recover it. 
00:31:14.255 [2024-05-13 03:12:04.937547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.937739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.937766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.255 qpair failed and we were unable to recover it. 00:31:14.255 [2024-05-13 03:12:04.938069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.938320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.938361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.255 qpair failed and we were unable to recover it. 00:31:14.255 [2024-05-13 03:12:04.938634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.938903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.938933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.255 qpair failed and we were unable to recover it. 00:31:14.255 [2024-05-13 03:12:04.939165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.939403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.939429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.255 qpair failed and we were unable to recover it. 00:31:14.255 [2024-05-13 03:12:04.939730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.940000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.940028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.255 qpair failed and we were unable to recover it. 00:31:14.255 [2024-05-13 03:12:04.940304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.940671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.940708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.255 qpair failed and we were unable to recover it. 00:31:14.255 [2024-05-13 03:12:04.940956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.941368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.941418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.255 qpair failed and we were unable to recover it. 
00:31:14.255 [2024-05-13 03:12:04.941706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.941966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.942017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.255 qpair failed and we were unable to recover it. 00:31:14.255 [2024-05-13 03:12:04.942283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.255 [2024-05-13 03:12:04.942572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.942637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.256 qpair failed and we were unable to recover it. 00:31:14.256 [2024-05-13 03:12:04.942923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.943159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.943184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.256 qpair failed and we were unable to recover it. 00:31:14.256 [2024-05-13 03:12:04.943470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.943735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.943766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.256 qpair failed and we were unable to recover it. 00:31:14.256 [2024-05-13 03:12:04.944053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.944415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.944443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.256 qpair failed and we were unable to recover it. 00:31:14.256 [2024-05-13 03:12:04.944762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.945033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.945061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.256 qpair failed and we were unable to recover it. 00:31:14.256 [2024-05-13 03:12:04.945301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.945777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.945806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.256 qpair failed and we were unable to recover it. 
00:31:14.256 [2024-05-13 03:12:04.946070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.946308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.946339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.256 qpair failed and we were unable to recover it. 00:31:14.256 [2024-05-13 03:12:04.946668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.946941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.946970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.256 qpair failed and we were unable to recover it. 00:31:14.256 [2024-05-13 03:12:04.947205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.947411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.947441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.256 qpair failed and we were unable to recover it. 00:31:14.256 [2024-05-13 03:12:04.947679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.947984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.948024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.256 qpair failed and we were unable to recover it. 00:31:14.256 [2024-05-13 03:12:04.948262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.948559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.948585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.256 qpair failed and we were unable to recover it. 00:31:14.256 [2024-05-13 03:12:04.948914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.949375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.949426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.256 qpair failed and we were unable to recover it. 00:31:14.256 [2024-05-13 03:12:04.949652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.949929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.949958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.256 qpair failed and we were unable to recover it. 
00:31:14.256 [2024-05-13 03:12:04.950295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.950659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.950705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.256 qpair failed and we were unable to recover it. 00:31:14.256 [2024-05-13 03:12:04.950968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.951229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.951255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.256 qpair failed and we were unable to recover it. 00:31:14.256 [2024-05-13 03:12:04.951549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.951797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.951827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.256 qpair failed and we were unable to recover it. 00:31:14.256 [2024-05-13 03:12:04.952090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.952298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.952322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.256 qpair failed and we were unable to recover it. 00:31:14.256 [2024-05-13 03:12:04.952592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.952899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.952929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.256 qpair failed and we were unable to recover it. 00:31:14.256 [2024-05-13 03:12:04.953189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.953500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.953543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.256 qpair failed and we were unable to recover it. 00:31:14.256 [2024-05-13 03:12:04.953782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.954022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.954048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.256 qpair failed and we were unable to recover it. 
00:31:14.256 [2024-05-13 03:12:04.954348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.954562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.954591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.256 qpair failed and we were unable to recover it. 00:31:14.256 [2024-05-13 03:12:04.954835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.955286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.955339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.256 qpair failed and we were unable to recover it. 00:31:14.256 [2024-05-13 03:12:04.955642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.955892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.955923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.256 qpair failed and we were unable to recover it. 00:31:14.256 [2024-05-13 03:12:04.956169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.956664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.956734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.256 qpair failed and we were unable to recover it. 00:31:14.256 [2024-05-13 03:12:04.956979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.957378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.957427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.256 qpair failed and we were unable to recover it. 00:31:14.256 [2024-05-13 03:12:04.957756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.958005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.958036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.256 qpair failed and we were unable to recover it. 00:31:14.256 [2024-05-13 03:12:04.958280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.958583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.958642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.256 qpair failed and we were unable to recover it. 
00:31:14.256 [2024-05-13 03:12:04.958921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.959144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.959170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.256 qpair failed and we were unable to recover it. 00:31:14.256 [2024-05-13 03:12:04.959401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.959691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.959752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.256 qpair failed and we were unable to recover it. 00:31:14.256 [2024-05-13 03:12:04.959972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.960279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.256 [2024-05-13 03:12:04.960319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.257 qpair failed and we were unable to recover it. 00:31:14.257 [2024-05-13 03:12:04.960563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.257 [2024-05-13 03:12:04.960806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.257 [2024-05-13 03:12:04.960836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.257 qpair failed and we were unable to recover it. 00:31:14.257 [2024-05-13 03:12:04.961110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.257 [2024-05-13 03:12:04.961359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.257 [2024-05-13 03:12:04.961387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.257 qpair failed and we were unable to recover it. 00:31:14.257 [2024-05-13 03:12:04.961664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.257 [2024-05-13 03:12:04.961903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.257 [2024-05-13 03:12:04.961943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.257 qpair failed and we were unable to recover it. 00:31:14.257 [2024-05-13 03:12:04.962219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.257 [2024-05-13 03:12:04.962461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.257 [2024-05-13 03:12:04.962485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.257 qpair failed and we were unable to recover it. 
00:31:14.257 [2024-05-13 03:12:04.962775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.257 [2024-05-13 03:12:04.963033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.257 [2024-05-13 03:12:04.963078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.257 qpair failed and we were unable to recover it. 00:31:14.257 [2024-05-13 03:12:04.963356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.257 [2024-05-13 03:12:04.963553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.257 [2024-05-13 03:12:04.963580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.257 qpair failed and we were unable to recover it. 00:31:14.257 [2024-05-13 03:12:04.963847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.257 [2024-05-13 03:12:04.964077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.257 [2024-05-13 03:12:04.964106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.257 qpair failed and we were unable to recover it. 00:31:14.257 [2024-05-13 03:12:04.964359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.257 [2024-05-13 03:12:04.964585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.257 [2024-05-13 03:12:04.964611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.257 qpair failed and we were unable to recover it. 00:31:14.257 [2024-05-13 03:12:04.964865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.257 [2024-05-13 03:12:04.965066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.257 [2024-05-13 03:12:04.965091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.257 qpair failed and we were unable to recover it. 00:31:14.257 [2024-05-13 03:12:04.965332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.257 [2024-05-13 03:12:04.965596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.257 [2024-05-13 03:12:04.965621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.257 qpair failed and we were unable to recover it. 00:31:14.257 [2024-05-13 03:12:04.965938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.257 [2024-05-13 03:12:04.966364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.257 [2024-05-13 03:12:04.966414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.257 qpair failed and we were unable to recover it. 
00:31:14.257 [2024-05-13 03:12:04.966657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.257 [2024-05-13 03:12:04.966928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.257 [2024-05-13 03:12:04.966957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.257 qpair failed and we were unable to recover it. 00:31:14.257 [2024-05-13 03:12:04.967198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.257 [2024-05-13 03:12:04.967530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.257 [2024-05-13 03:12:04.967560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.257 qpair failed and we were unable to recover it. 00:31:14.257 [2024-05-13 03:12:04.967812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.257 [2024-05-13 03:12:04.968064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.257 [2024-05-13 03:12:04.968090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.257 qpair failed and we were unable to recover it. 00:31:14.257 [2024-05-13 03:12:04.968343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.257 [2024-05-13 03:12:04.968584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.257 [2024-05-13 03:12:04.968613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.257 qpair failed and we were unable to recover it. 00:31:14.257 [2024-05-13 03:12:04.968851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.257 [2024-05-13 03:12:04.969092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.257 [2024-05-13 03:12:04.969121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.257 qpair failed and we were unable to recover it. 00:31:14.257 [2024-05-13 03:12:04.969406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.257 [2024-05-13 03:12:04.969677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.257 [2024-05-13 03:12:04.969722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.257 qpair failed and we were unable to recover it. 00:31:14.257 [2024-05-13 03:12:04.969948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.257 [2024-05-13 03:12:04.970208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.257 [2024-05-13 03:12:04.970237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.257 qpair failed and we were unable to recover it. 
00:31:14.257 [2024-05-13 03:12:04.970441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.257 [2024-05-13 03:12:04.970692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.257 [2024-05-13 03:12:04.970739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.257 qpair failed and we were unable to recover it. 00:31:14.257 [2024-05-13 03:12:04.971131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.257 [2024-05-13 03:12:04.971608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.257 [2024-05-13 03:12:04.971662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.257 qpair failed and we were unable to recover it. 00:31:14.257 [2024-05-13 03:12:04.971930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.257 [2024-05-13 03:12:04.972156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.257 [2024-05-13 03:12:04.972187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.257 qpair failed and we were unable to recover it. 00:31:14.257 [2024-05-13 03:12:04.972442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.257 [2024-05-13 03:12:04.972728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.257 [2024-05-13 03:12:04.972755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.257 qpair failed and we were unable to recover it. 00:31:14.257 [2024-05-13 03:12:04.973069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.257 [2024-05-13 03:12:04.973593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.257 [2024-05-13 03:12:04.973622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.257 qpair failed and we were unable to recover it. 00:31:14.257 [2024-05-13 03:12:04.973873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.257 [2024-05-13 03:12:04.974085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.257 [2024-05-13 03:12:04.974114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.257 qpair failed and we were unable to recover it. 00:31:14.258 [2024-05-13 03:12:04.974378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.974595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.974620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.258 qpair failed and we were unable to recover it. 
00:31:14.258 [2024-05-13 03:12:04.974840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.975080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.975106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.258 qpair failed and we were unable to recover it. 00:31:14.258 [2024-05-13 03:12:04.975320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.975593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.975622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.258 qpair failed and we were unable to recover it. 00:31:14.258 [2024-05-13 03:12:04.975903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.976204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.976229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.258 qpair failed and we were unable to recover it. 00:31:14.258 [2024-05-13 03:12:04.976483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.976758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.976786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.258 qpair failed and we were unable to recover it. 00:31:14.258 [2024-05-13 03:12:04.977096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.977528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.977586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.258 qpair failed and we were unable to recover it. 00:31:14.258 [2024-05-13 03:12:04.977855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.978128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.978157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.258 qpair failed and we were unable to recover it. 00:31:14.258 [2024-05-13 03:12:04.978387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.978647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.978671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.258 qpair failed and we were unable to recover it. 
00:31:14.258 [2024-05-13 03:12:04.978961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.979208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.979238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.258 qpair failed and we were unable to recover it. 00:31:14.258 [2024-05-13 03:12:04.979506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.979804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.979830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.258 qpair failed and we were unable to recover it. 00:31:14.258 [2024-05-13 03:12:04.980102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.980321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.980346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.258 qpair failed and we were unable to recover it. 00:31:14.258 [2024-05-13 03:12:04.980595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.980831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.980869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.258 qpair failed and we were unable to recover it. 00:31:14.258 [2024-05-13 03:12:04.981148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.981488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.981522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.258 qpair failed and we were unable to recover it. 00:31:14.258 [2024-05-13 03:12:04.981783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.982054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.982079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.258 qpair failed and we were unable to recover it. 00:31:14.258 [2024-05-13 03:12:04.982313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.982774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.982803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.258 qpair failed and we were unable to recover it. 
00:31:14.258 [2024-05-13 03:12:04.983052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.983279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.983304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.258 qpair failed and we were unable to recover it. 00:31:14.258 [2024-05-13 03:12:04.983563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.983849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.983878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.258 qpair failed and we were unable to recover it. 00:31:14.258 [2024-05-13 03:12:04.984220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.984465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.984491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.258 qpair failed and we were unable to recover it. 00:31:14.258 [2024-05-13 03:12:04.984739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.984938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.984966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.258 qpair failed and we were unable to recover it. 00:31:14.258 [2024-05-13 03:12:04.985182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.985440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.985466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.258 qpair failed and we were unable to recover it. 00:31:14.258 [2024-05-13 03:12:04.985711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.985939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.985967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.258 qpair failed and we were unable to recover it. 00:31:14.258 [2024-05-13 03:12:04.986291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.986732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.986783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.258 qpair failed and we were unable to recover it. 
00:31:14.258 [2024-05-13 03:12:04.987012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.987278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.987308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.258 qpair failed and we were unable to recover it. 00:31:14.258 [2024-05-13 03:12:04.987573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.987815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.987857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.258 qpair failed and we were unable to recover it. 00:31:14.258 [2024-05-13 03:12:04.988181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.988731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.988778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.258 qpair failed and we were unable to recover it. 00:31:14.258 [2024-05-13 03:12:04.989115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.989427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.989458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.258 qpair failed and we were unable to recover it. 00:31:14.258 [2024-05-13 03:12:04.989710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.989942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.989972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.258 qpair failed and we were unable to recover it. 00:31:14.258 [2024-05-13 03:12:04.990198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.990436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.990465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.258 qpair failed and we were unable to recover it. 00:31:14.258 [2024-05-13 03:12:04.990690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.990964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.990991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.258 qpair failed and we were unable to recover it. 
00:31:14.258 [2024-05-13 03:12:04.991238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.258 [2024-05-13 03:12:04.991554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:04.991589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.259 qpair failed and we were unable to recover it. 00:31:14.259 [2024-05-13 03:12:04.991837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:04.992093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:04.992117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.259 qpair failed and we were unable to recover it. 00:31:14.259 [2024-05-13 03:12:04.992351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:04.992581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:04.992605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.259 qpair failed and we were unable to recover it. 00:31:14.259 [2024-05-13 03:12:04.992913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:04.993171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:04.993215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.259 qpair failed and we were unable to recover it. 00:31:14.259 [2024-05-13 03:12:04.993470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:04.993733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:04.993762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.259 qpair failed and we were unable to recover it. 00:31:14.259 [2024-05-13 03:12:04.993989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:04.994235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:04.994260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.259 qpair failed and we were unable to recover it. 00:31:14.259 [2024-05-13 03:12:04.994535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:04.994771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:04.994801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.259 qpair failed and we were unable to recover it. 
00:31:14.259 [2024-05-13 03:12:04.995041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:04.995492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:04.995543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.259 qpair failed and we were unable to recover it. 00:31:14.259 [2024-05-13 03:12:04.995800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:04.996041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:04.996081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.259 qpair failed and we were unable to recover it. 00:31:14.259 [2024-05-13 03:12:04.996314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:04.996771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:04.996801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.259 qpair failed and we were unable to recover it. 00:31:14.259 [2024-05-13 03:12:04.997026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:04.997498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:04.997549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.259 qpair failed and we were unable to recover it. 00:31:14.259 [2024-05-13 03:12:04.997813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:04.998045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:04.998069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.259 qpair failed and we were unable to recover it. 00:31:14.259 [2024-05-13 03:12:04.998370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:04.998648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:04.998672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.259 qpair failed and we were unable to recover it. 00:31:14.259 [2024-05-13 03:12:04.998925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:04.999258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:04.999295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.259 qpair failed and we were unable to recover it. 
00:31:14.259 [2024-05-13 03:12:04.999687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:04.999979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:05.000005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.259 qpair failed and we were unable to recover it. 00:31:14.259 [2024-05-13 03:12:05.000300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:05.000541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:05.000570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.259 qpair failed and we were unable to recover it. 00:31:14.259 [2024-05-13 03:12:05.000814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:05.001112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:05.001137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.259 qpair failed and we were unable to recover it. 00:31:14.259 [2024-05-13 03:12:05.001397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:05.001683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:05.001720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.259 qpair failed and we were unable to recover it. 00:31:14.259 [2024-05-13 03:12:05.001959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:05.002382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:05.002434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.259 qpair failed and we were unable to recover it. 00:31:14.259 [2024-05-13 03:12:05.002674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:05.002927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:05.002953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.259 qpair failed and we were unable to recover it. 00:31:14.259 [2024-05-13 03:12:05.003192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:05.003423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:05.003447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.259 qpair failed and we were unable to recover it. 
00:31:14.259 [2024-05-13 03:12:05.003708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:05.003923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:05.003953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.259 qpair failed and we were unable to recover it. 00:31:14.259 [2024-05-13 03:12:05.004192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:05.004458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:05.004487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.259 qpair failed and we were unable to recover it. 00:31:14.259 [2024-05-13 03:12:05.004791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:05.005079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:05.005108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.259 qpair failed and we were unable to recover it. 00:31:14.259 [2024-05-13 03:12:05.005319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:05.005561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:05.005591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.259 qpair failed and we were unable to recover it. 00:31:14.259 [2024-05-13 03:12:05.005867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:05.006303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:05.006351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.259 qpair failed and we were unable to recover it. 00:31:14.259 [2024-05-13 03:12:05.006556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:05.006796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:05.006838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.259 qpair failed and we were unable to recover it. 00:31:14.259 [2024-05-13 03:12:05.007141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:05.007334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:05.007359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.259 qpair failed and we were unable to recover it. 
00:31:14.259 [2024-05-13 03:12:05.007645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:05.007906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:05.007936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.259 qpair failed and we were unable to recover it. 00:31:14.259 [2024-05-13 03:12:05.008218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:05.008665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.259 [2024-05-13 03:12:05.008728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.259 qpair failed and we were unable to recover it. 00:31:14.259 [2024-05-13 03:12:05.008970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.260 [2024-05-13 03:12:05.009278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.260 [2024-05-13 03:12:05.009303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.260 qpair failed and we were unable to recover it. 00:31:14.260 [2024-05-13 03:12:05.009619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.260 [2024-05-13 03:12:05.009913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.260 [2024-05-13 03:12:05.009939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.260 qpair failed and we were unable to recover it. 00:31:14.260 [2024-05-13 03:12:05.010189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.260 [2024-05-13 03:12:05.010439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.260 [2024-05-13 03:12:05.010478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.260 qpair failed and we were unable to recover it. 00:31:14.260 [2024-05-13 03:12:05.010756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.260 [2024-05-13 03:12:05.010972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.260 [2024-05-13 03:12:05.011001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.260 qpair failed and we were unable to recover it. 00:31:14.260 [2024-05-13 03:12:05.011273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.260 [2024-05-13 03:12:05.011770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.260 [2024-05-13 03:12:05.011800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.260 qpair failed and we were unable to recover it. 
00:31:14.260 [2024-05-13 03:12:05.012063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.260 [2024-05-13 03:12:05.012434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.260 [2024-05-13 03:12:05.012491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.260 qpair failed and we were unable to recover it. 00:31:14.260 [2024-05-13 03:12:05.012759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.260 [2024-05-13 03:12:05.013021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.260 [2024-05-13 03:12:05.013046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.260 qpair failed and we were unable to recover it. 00:31:14.260 [2024-05-13 03:12:05.013353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.260 [2024-05-13 03:12:05.013608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.260 [2024-05-13 03:12:05.013643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.260 qpair failed and we were unable to recover it. 00:31:14.260 [2024-05-13 03:12:05.013926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.260 [2024-05-13 03:12:05.014347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.260 [2024-05-13 03:12:05.014401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.260 qpair failed and we were unable to recover it. 00:31:14.260 [2024-05-13 03:12:05.014642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.260 [2024-05-13 03:12:05.014907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.260 [2024-05-13 03:12:05.014937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.260 qpair failed and we were unable to recover it. 00:31:14.260 [2024-05-13 03:12:05.015217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.260 [2024-05-13 03:12:05.015651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.260 [2024-05-13 03:12:05.015709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.260 qpair failed and we were unable to recover it. 00:31:14.260 [2024-05-13 03:12:05.015998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.260 [2024-05-13 03:12:05.016347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.260 [2024-05-13 03:12:05.016376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.260 qpair failed and we were unable to recover it. 
00:31:14.260 [2024-05-13 03:12:05.016638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:14.260 [2024-05-13 03:12:05.016918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:14.260 [2024-05-13 03:12:05.016948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420
00:31:14.260 qpair failed and we were unable to recover it.
00:31:14.260 [... the same four-line failure sequence (two "posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111" messages, one "nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420", and "qpair failed and we were unable to recover it.") repeats for every reconnect attempt logged between 2024-05-13 03:12:05.017 and 03:12:05.102, host time 00:31:14.260 through 00:31:14.534 ...]
00:31:14.534 [2024-05-13 03:12:05.102581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:14.534 [2024-05-13 03:12:05.102821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:14.534 [2024-05-13 03:12:05.102848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420
00:31:14.534 qpair failed and we were unable to recover it.
00:31:14.534 [2024-05-13 03:12:05.103063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.534 [2024-05-13 03:12:05.103294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.534 [2024-05-13 03:12:05.103320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.534 qpair failed and we were unable to recover it. 00:31:14.534 [2024-05-13 03:12:05.103499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.534 [2024-05-13 03:12:05.103751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.534 [2024-05-13 03:12:05.103778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.534 qpair failed and we were unable to recover it. 00:31:14.534 [2024-05-13 03:12:05.104034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.534 [2024-05-13 03:12:05.104273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.534 [2024-05-13 03:12:05.104303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.534 qpair failed and we were unable to recover it. 00:31:14.534 [2024-05-13 03:12:05.104541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.534 [2024-05-13 03:12:05.104763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.534 [2024-05-13 03:12:05.104792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.534 qpair failed and we were unable to recover it. 00:31:14.534 [2024-05-13 03:12:05.105028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.534 [2024-05-13 03:12:05.105257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.534 [2024-05-13 03:12:05.105282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.534 qpair failed and we were unable to recover it. 00:31:14.534 [2024-05-13 03:12:05.105484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.534 [2024-05-13 03:12:05.105708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.534 [2024-05-13 03:12:05.105734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.534 qpair failed and we were unable to recover it. 00:31:14.534 [2024-05-13 03:12:05.105934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.534 [2024-05-13 03:12:05.106147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.534 [2024-05-13 03:12:05.106173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.534 qpair failed and we were unable to recover it. 
00:31:14.534 [2024-05-13 03:12:05.106383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.534 [2024-05-13 03:12:05.106610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.534 [2024-05-13 03:12:05.106635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.534 qpair failed and we were unable to recover it. 00:31:14.534 [2024-05-13 03:12:05.106864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.534 [2024-05-13 03:12:05.107083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.534 [2024-05-13 03:12:05.107109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.534 qpair failed and we were unable to recover it. 00:31:14.534 [2024-05-13 03:12:05.107333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.534 [2024-05-13 03:12:05.107581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.534 [2024-05-13 03:12:05.107621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.534 qpair failed and we were unable to recover it. 00:31:14.534 [2024-05-13 03:12:05.107879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.534 [2024-05-13 03:12:05.108078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.534 [2024-05-13 03:12:05.108105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.534 qpair failed and we were unable to recover it. 00:31:14.534 [2024-05-13 03:12:05.108343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.534 [2024-05-13 03:12:05.108580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.534 [2024-05-13 03:12:05.108605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.534 qpair failed and we were unable to recover it. 00:31:14.534 [2024-05-13 03:12:05.108868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.534 [2024-05-13 03:12:05.109115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.534 [2024-05-13 03:12:05.109146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.534 qpair failed and we were unable to recover it. 00:31:14.534 [2024-05-13 03:12:05.109369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.534 [2024-05-13 03:12:05.109622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.534 [2024-05-13 03:12:05.109648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.534 qpair failed and we were unable to recover it. 
00:31:14.535 [2024-05-13 03:12:05.109888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.110092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.110122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.535 qpair failed and we were unable to recover it. 00:31:14.535 [2024-05-13 03:12:05.110395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.110610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.110636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.535 qpair failed and we were unable to recover it. 00:31:14.535 [2024-05-13 03:12:05.110840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.111056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.111082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.535 qpair failed and we were unable to recover it. 00:31:14.535 [2024-05-13 03:12:05.111410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.111637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.111667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.535 qpair failed and we were unable to recover it. 00:31:14.535 [2024-05-13 03:12:05.111905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.112173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.112202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.535 qpair failed and we were unable to recover it. 00:31:14.535 [2024-05-13 03:12:05.112475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.112714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.112744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.535 qpair failed and we were unable to recover it. 00:31:14.535 [2024-05-13 03:12:05.112977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.113227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.113252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.535 qpair failed and we were unable to recover it. 
00:31:14.535 [2024-05-13 03:12:05.113520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.113773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.113804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.535 qpair failed and we were unable to recover it. 00:31:14.535 [2024-05-13 03:12:05.114054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.114296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.114322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.535 qpair failed and we were unable to recover it. 00:31:14.535 [2024-05-13 03:12:05.114573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.114817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.114847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.535 qpair failed and we were unable to recover it. 00:31:14.535 [2024-05-13 03:12:05.115092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.115313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.115338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.535 qpair failed and we were unable to recover it. 00:31:14.535 [2024-05-13 03:12:05.115517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.115746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.115773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.535 qpair failed and we were unable to recover it. 00:31:14.535 [2024-05-13 03:12:05.116042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.116351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.116380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.535 qpair failed and we were unable to recover it. 00:31:14.535 [2024-05-13 03:12:05.116657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.116929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.116960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.535 qpair failed and we were unable to recover it. 
00:31:14.535 [2024-05-13 03:12:05.117270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.117523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.117548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.535 qpair failed and we were unable to recover it. 00:31:14.535 [2024-05-13 03:12:05.117872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.118223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.118295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.535 qpair failed and we were unable to recover it. 00:31:14.535 [2024-05-13 03:12:05.118738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.118990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.119020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.535 qpair failed and we were unable to recover it. 00:31:14.535 [2024-05-13 03:12:05.119287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.119478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.119505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.535 qpair failed and we were unable to recover it. 00:31:14.535 [2024-05-13 03:12:05.119809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.120029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.120056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.535 qpair failed and we were unable to recover it. 00:31:14.535 [2024-05-13 03:12:05.120315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.120751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.120781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.535 qpair failed and we were unable to recover it. 00:31:14.535 [2024-05-13 03:12:05.121017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.121316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.121343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.535 qpair failed and we were unable to recover it. 
00:31:14.535 [2024-05-13 03:12:05.121625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.121858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.121885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.535 qpair failed and we were unable to recover it. 00:31:14.535 [2024-05-13 03:12:05.122114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.122326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.122353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.535 qpair failed and we were unable to recover it. 00:31:14.535 [2024-05-13 03:12:05.122617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.122844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.122870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.535 qpair failed and we were unable to recover it. 00:31:14.535 [2024-05-13 03:12:05.123118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.123577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.123627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.535 qpair failed and we were unable to recover it. 00:31:14.535 [2024-05-13 03:12:05.123899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.124284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.124335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.535 qpair failed and we were unable to recover it. 00:31:14.535 [2024-05-13 03:12:05.124657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.124942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.124969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.535 qpair failed and we were unable to recover it. 00:31:14.535 [2024-05-13 03:12:05.125298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.535 [2024-05-13 03:12:05.125762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.125792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.536 qpair failed and we were unable to recover it. 
00:31:14.536 [2024-05-13 03:12:05.126028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.126446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.126498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.536 qpair failed and we were unable to recover it. 00:31:14.536 [2024-05-13 03:12:05.126818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.127077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.127106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.536 qpair failed and we were unable to recover it. 00:31:14.536 [2024-05-13 03:12:05.127304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.127541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.127571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.536 qpair failed and we were unable to recover it. 00:31:14.536 [2024-05-13 03:12:05.127834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.128075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.128106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.536 qpair failed and we were unable to recover it. 00:31:14.536 [2024-05-13 03:12:05.128362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.128601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.128626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.536 qpair failed and we were unable to recover it. 00:31:14.536 [2024-05-13 03:12:05.128932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.129255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.129284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.536 qpair failed and we were unable to recover it. 00:31:14.536 [2024-05-13 03:12:05.129518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.129736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.129762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.536 qpair failed and we were unable to recover it. 
00:31:14.536 [2024-05-13 03:12:05.130001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.130206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.130231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.536 qpair failed and we were unable to recover it. 00:31:14.536 [2024-05-13 03:12:05.130483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.130702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.130732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.536 qpair failed and we were unable to recover it. 00:31:14.536 [2024-05-13 03:12:05.130942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.131377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.131437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.536 qpair failed and we were unable to recover it. 00:31:14.536 [2024-05-13 03:12:05.131687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.131988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.132017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.536 qpair failed and we were unable to recover it. 00:31:14.536 [2024-05-13 03:12:05.132336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.132602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.132627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.536 qpair failed and we were unable to recover it. 00:31:14.536 [2024-05-13 03:12:05.132886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.133181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.133239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.536 qpair failed and we were unable to recover it. 00:31:14.536 [2024-05-13 03:12:05.133488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.133673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.133715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.536 qpair failed and we were unable to recover it. 
00:31:14.536 [2024-05-13 03:12:05.133963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.134338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.134388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.536 qpair failed and we were unable to recover it. 00:31:14.536 [2024-05-13 03:12:05.134655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.134905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.134935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.536 qpair failed and we were unable to recover it. 00:31:14.536 [2024-05-13 03:12:05.135176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.135414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.135440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.536 qpair failed and we were unable to recover it. 00:31:14.536 [2024-05-13 03:12:05.135668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.135924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.135965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.536 qpair failed and we were unable to recover it. 00:31:14.536 [2024-05-13 03:12:05.136218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.136419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.136444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.536 qpair failed and we were unable to recover it. 00:31:14.536 [2024-05-13 03:12:05.136665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.137298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.137343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.536 qpair failed and we were unable to recover it. 00:31:14.536 [2024-05-13 03:12:05.137592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.137846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.137874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.536 qpair failed and we were unable to recover it. 
00:31:14.536 [2024-05-13 03:12:05.138108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.138349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.138379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.536 qpair failed and we were unable to recover it. 00:31:14.536 [2024-05-13 03:12:05.138616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.138877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.138904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.536 qpair failed and we were unable to recover it. 00:31:14.536 [2024-05-13 03:12:05.139132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.139367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.139398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.536 qpair failed and we were unable to recover it. 00:31:14.536 [2024-05-13 03:12:05.139631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.139895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.139925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.536 qpair failed and we were unable to recover it. 00:31:14.536 [2024-05-13 03:12:05.140191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.140444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.140484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.536 qpair failed and we were unable to recover it. 00:31:14.536 [2024-05-13 03:12:05.140740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.140987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.536 [2024-05-13 03:12:05.141028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.537 qpair failed and we were unable to recover it. 00:31:14.537 [2024-05-13 03:12:05.141252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.141637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.141708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.537 qpair failed and we were unable to recover it. 
00:31:14.537 [2024-05-13 03:12:05.142102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.142476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.142503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.537 qpair failed and we were unable to recover it. 00:31:14.537 [2024-05-13 03:12:05.142818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.143080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.143126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.537 qpair failed and we were unable to recover it. 00:31:14.537 [2024-05-13 03:12:05.143404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.143663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.143714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.537 qpair failed and we were unable to recover it. 00:31:14.537 [2024-05-13 03:12:05.143976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.144215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.144241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.537 qpair failed and we were unable to recover it. 00:31:14.537 [2024-05-13 03:12:05.144482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.144731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.144759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.537 qpair failed and we were unable to recover it. 00:31:14.537 [2024-05-13 03:12:05.144981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.145428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.145478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.537 qpair failed and we were unable to recover it. 00:31:14.537 [2024-05-13 03:12:05.145776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.146004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.146034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.537 qpair failed and we were unable to recover it. 
00:31:14.537 [2024-05-13 03:12:05.146270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.146547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.146572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.537 qpair failed and we were unable to recover it. 00:31:14.537 [2024-05-13 03:12:05.146777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.146982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.147013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.537 qpair failed and we were unable to recover it. 00:31:14.537 [2024-05-13 03:12:05.147277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.147663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.147717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.537 qpair failed and we were unable to recover it. 00:31:14.537 [2024-05-13 03:12:05.147970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.148156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.148181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.537 qpair failed and we were unable to recover it. 00:31:14.537 [2024-05-13 03:12:05.148403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.148686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.148728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.537 qpair failed and we were unable to recover it. 00:31:14.537 [2024-05-13 03:12:05.149002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.149249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.149290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.537 qpair failed and we were unable to recover it. 00:31:14.537 [2024-05-13 03:12:05.149573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.149816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.149846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.537 qpair failed and we were unable to recover it. 
00:31:14.537 [2024-05-13 03:12:05.150114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.150385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.150409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.537 qpair failed and we were unable to recover it. 00:31:14.537 [2024-05-13 03:12:05.150640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.150937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.150963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.537 qpair failed and we were unable to recover it. 00:31:14.537 [2024-05-13 03:12:05.151216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.151564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.151614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.537 qpair failed and we were unable to recover it. 00:31:14.537 [2024-05-13 03:12:05.151852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.152080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.152106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.537 qpair failed and we were unable to recover it. 00:31:14.537 [2024-05-13 03:12:05.152391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.152601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.152631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.537 qpair failed and we were unable to recover it. 00:31:14.537 [2024-05-13 03:12:05.152893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.153212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.153262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.537 qpair failed and we were unable to recover it. 00:31:14.537 [2024-05-13 03:12:05.153503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.153768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.153797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.537 qpair failed and we were unable to recover it. 
00:31:14.537 [2024-05-13 03:12:05.154067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.154286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.154312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.537 qpair failed and we were unable to recover it. 00:31:14.537 [2024-05-13 03:12:05.154613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.154885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.154915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.537 qpair failed and we were unable to recover it. 00:31:14.537 [2024-05-13 03:12:05.155164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.155646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.155706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.537 qpair failed and we were unable to recover it. 00:31:14.537 [2024-05-13 03:12:05.156011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.156260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.156289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.537 qpair failed and we were unable to recover it. 00:31:14.537 [2024-05-13 03:12:05.156529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.156767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.537 [2024-05-13 03:12:05.156797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.537 qpair failed and we were unable to recover it. 00:31:14.537 [2024-05-13 03:12:05.157041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.157246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.157276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.538 qpair failed and we were unable to recover it. 00:31:14.538 [2024-05-13 03:12:05.157474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.157683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.157728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.538 qpair failed and we were unable to recover it. 
00:31:14.538 [2024-05-13 03:12:05.157955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.158203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.158243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.538 qpair failed and we were unable to recover it. 00:31:14.538 [2024-05-13 03:12:05.158517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.158815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.158872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.538 qpair failed and we were unable to recover it. 00:31:14.538 [2024-05-13 03:12:05.159161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.159391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.159427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.538 qpair failed and we were unable to recover it. 00:31:14.538 [2024-05-13 03:12:05.159643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.159901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.159926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.538 qpair failed and we were unable to recover it. 00:31:14.538 [2024-05-13 03:12:05.160208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.160399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.160423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.538 qpair failed and we were unable to recover it. 00:31:14.538 [2024-05-13 03:12:05.160682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.160958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.160988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.538 qpair failed and we were unable to recover it. 00:31:14.538 [2024-05-13 03:12:05.161254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.161466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.161491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.538 qpair failed and we were unable to recover it. 
00:31:14.538 [2024-05-13 03:12:05.161745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.162048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.162076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.538 qpair failed and we were unable to recover it. 00:31:14.538 [2024-05-13 03:12:05.162354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.162603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.162642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.538 qpair failed and we were unable to recover it. 00:31:14.538 [2024-05-13 03:12:05.162930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.163301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.163325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.538 qpair failed and we were unable to recover it. 00:31:14.538 [2024-05-13 03:12:05.163623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.163888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.163919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.538 qpair failed and we were unable to recover it. 00:31:14.538 [2024-05-13 03:12:05.164248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.164550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.164588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.538 qpair failed and we were unable to recover it. 00:31:14.538 [2024-05-13 03:12:05.164831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.165117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.165141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.538 qpair failed and we were unable to recover it. 00:31:14.538 [2024-05-13 03:12:05.165377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.165655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.165680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.538 qpair failed and we were unable to recover it. 
00:31:14.538 [2024-05-13 03:12:05.165935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.166397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.166448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.538 qpair failed and we were unable to recover it. 00:31:14.538 [2024-05-13 03:12:05.166687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.167020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.167046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.538 qpair failed and we were unable to recover it. 00:31:14.538 [2024-05-13 03:12:05.167322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.167792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.167821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.538 qpair failed and we were unable to recover it. 00:31:14.538 [2024-05-13 03:12:05.168122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.168441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.168481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.538 qpair failed and we were unable to recover it. 00:31:14.538 [2024-05-13 03:12:05.168745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.168986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.169027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.538 qpair failed and we were unable to recover it. 00:31:14.538 [2024-05-13 03:12:05.169261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.169545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.169570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.538 qpair failed and we were unable to recover it. 00:31:14.538 [2024-05-13 03:12:05.169878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.170186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.170211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.538 qpair failed and we were unable to recover it. 
00:31:14.538 [2024-05-13 03:12:05.170440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.170639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.170667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.538 qpair failed and we were unable to recover it. 00:31:14.538 [2024-05-13 03:12:05.170940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.171273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.171303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.538 qpair failed and we were unable to recover it. 00:31:14.538 [2024-05-13 03:12:05.171531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.171799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.171826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.538 qpair failed and we were unable to recover it. 00:31:14.538 [2024-05-13 03:12:05.172108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.172565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.172619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.538 qpair failed and we were unable to recover it. 00:31:14.538 [2024-05-13 03:12:05.172860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.173318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.538 [2024-05-13 03:12:05.173369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.538 qpair failed and we were unable to recover it. 00:31:14.538 [2024-05-13 03:12:05.173631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.539 [2024-05-13 03:12:05.173874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.539 [2024-05-13 03:12:05.173900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.539 qpair failed and we were unable to recover it. 00:31:14.539 [2024-05-13 03:12:05.174193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.539 [2024-05-13 03:12:05.174638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.539 [2024-05-13 03:12:05.174688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.539 qpair failed and we were unable to recover it. 
00:31:14.539 [2024-05-13 03:12:05.174974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.539 [2024-05-13 03:12:05.175218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.539 [2024-05-13 03:12:05.175248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.539 qpair failed and we were unable to recover it. 00:31:14.539 [2024-05-13 03:12:05.175477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.539 [2024-05-13 03:12:05.175728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.539 [2024-05-13 03:12:05.175755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.539 qpair failed and we were unable to recover it. 00:31:14.539 [2024-05-13 03:12:05.176071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.539 [2024-05-13 03:12:05.176431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.539 [2024-05-13 03:12:05.176459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.539 qpair failed and we were unable to recover it. 00:31:14.539 [2024-05-13 03:12:05.176708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.539 [2024-05-13 03:12:05.176944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.539 [2024-05-13 03:12:05.176978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.539 qpair failed and we were unable to recover it. 00:31:14.539 [2024-05-13 03:12:05.177328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.539 [2024-05-13 03:12:05.177596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.539 [2024-05-13 03:12:05.177621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.539 qpair failed and we were unable to recover it. 00:31:14.539 [2024-05-13 03:12:05.177893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.539 [2024-05-13 03:12:05.178146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.539 [2024-05-13 03:12:05.178176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.539 qpair failed and we were unable to recover it. 00:31:14.539 [2024-05-13 03:12:05.178433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.539 [2024-05-13 03:12:05.178813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.539 [2024-05-13 03:12:05.178843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.539 qpair failed and we were unable to recover it. 
00:31:14.539 [2024-05-13 03:12:05.179066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.539 [2024-05-13 03:12:05.179306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.539 [2024-05-13 03:12:05.179332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.539 qpair failed and we were unable to recover it. 00:31:14.539 [2024-05-13 03:12:05.179611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.539 [2024-05-13 03:12:05.179872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.539 [2024-05-13 03:12:05.179899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.539 qpair failed and we were unable to recover it. 00:31:14.539 [2024-05-13 03:12:05.180162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.539 [2024-05-13 03:12:05.180483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.539 [2024-05-13 03:12:05.180546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.539 qpair failed and we were unable to recover it. 00:31:14.539 [2024-05-13 03:12:05.180817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.539 [2024-05-13 03:12:05.181253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.539 [2024-05-13 03:12:05.181304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.539 qpair failed and we were unable to recover it. 00:31:14.539 [2024-05-13 03:12:05.181515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.539 [2024-05-13 03:12:05.181749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.539 [2024-05-13 03:12:05.181778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.539 qpair failed and we were unable to recover it. 00:31:14.539 [2024-05-13 03:12:05.182044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.539 [2024-05-13 03:12:05.182516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.539 [2024-05-13 03:12:05.182564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.539 qpair failed and we were unable to recover it. 00:31:14.539 [2024-05-13 03:12:05.182887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.539 [2024-05-13 03:12:05.183328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.539 [2024-05-13 03:12:05.183380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.539 qpair failed and we were unable to recover it. 
00:31:14.539 [2024-05-13 03:12:05.183659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.539 [2024-05-13 03:12:05.183972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.539 [2024-05-13 03:12:05.184024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.539 qpair failed and we were unable to recover it. 00:31:14.539 [2024-05-13 03:12:05.184306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.539 [2024-05-13 03:12:05.184543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.539 [2024-05-13 03:12:05.184568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.539 qpair failed and we were unable to recover it. 00:31:14.539 [2024-05-13 03:12:05.184815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.539 [2024-05-13 03:12:05.185126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.539 [2024-05-13 03:12:05.185150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.539 qpair failed and we were unable to recover it. 00:31:14.539 [2024-05-13 03:12:05.185416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.539 [2024-05-13 03:12:05.185651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.539 [2024-05-13 03:12:05.185680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.539 qpair failed and we were unable to recover it. 00:31:14.539 [2024-05-13 03:12:05.185973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.539 [2024-05-13 03:12:05.186361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.539 [2024-05-13 03:12:05.186414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.539 qpair failed and we were unable to recover it. 00:31:14.539 [2024-05-13 03:12:05.186660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.539 [2024-05-13 03:12:05.187028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.539 [2024-05-13 03:12:05.187074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.539 qpair failed and we were unable to recover it. 00:31:14.539 [2024-05-13 03:12:05.187367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.539 [2024-05-13 03:12:05.187642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.539 [2024-05-13 03:12:05.187672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.539 qpair failed and we were unable to recover it. 
00:31:14.539 [2024-05-13 03:12:05.187955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.188388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.188438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.540 qpair failed and we were unable to recover it. 00:31:14.540 [2024-05-13 03:12:05.188754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.189034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.189064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.540 qpair failed and we were unable to recover it. 00:31:14.540 [2024-05-13 03:12:05.189347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.189632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.189657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.540 qpair failed and we were unable to recover it. 00:31:14.540 [2024-05-13 03:12:05.189949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.190219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.190248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.540 qpair failed and we were unable to recover it. 00:31:14.540 [2024-05-13 03:12:05.190567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.190827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.190858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.540 qpair failed and we were unable to recover it. 00:31:14.540 [2024-05-13 03:12:05.191117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.191378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.191402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.540 qpair failed and we were unable to recover it. 00:31:14.540 [2024-05-13 03:12:05.191679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.191956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.191981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.540 qpair failed and we were unable to recover it. 
00:31:14.540 [2024-05-13 03:12:05.192236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.192565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.192597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.540 qpair failed and we were unable to recover it. 00:31:14.540 [2024-05-13 03:12:05.192876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.193144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.193169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.540 qpair failed and we were unable to recover it. 00:31:14.540 [2024-05-13 03:12:05.193466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.193880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.193909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.540 qpair failed and we were unable to recover it. 00:31:14.540 [2024-05-13 03:12:05.194158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.194419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.194444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.540 qpair failed and we were unable to recover it. 00:31:14.540 [2024-05-13 03:12:05.194763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.195202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.195253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.540 qpair failed and we were unable to recover it. 00:31:14.540 [2024-05-13 03:12:05.195492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.195731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.195758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.540 qpair failed and we were unable to recover it. 00:31:14.540 [2024-05-13 03:12:05.196011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.196298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.196341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.540 qpair failed and we were unable to recover it. 
00:31:14.540 [2024-05-13 03:12:05.196598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.196879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.196904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.540 qpair failed and we were unable to recover it. 00:31:14.540 [2024-05-13 03:12:05.197211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.197529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.197557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.540 qpair failed and we were unable to recover it. 00:31:14.540 [2024-05-13 03:12:05.197840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.198127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.198152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.540 qpair failed and we were unable to recover it. 00:31:14.540 [2024-05-13 03:12:05.198463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.198732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.198762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.540 qpair failed and we were unable to recover it. 00:31:14.540 [2024-05-13 03:12:05.199035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.199470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.199531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.540 qpair failed and we were unable to recover it. 00:31:14.540 [2024-05-13 03:12:05.199765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.199987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.200023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.540 qpair failed and we were unable to recover it. 00:31:14.540 [2024-05-13 03:12:05.200324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.200562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.200602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.540 qpair failed and we were unable to recover it. 
00:31:14.540 [2024-05-13 03:12:05.200870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.201145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.201170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.540 qpair failed and we were unable to recover it. 00:31:14.540 [2024-05-13 03:12:05.201396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.201596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.201621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.540 qpair failed and we were unable to recover it. 00:31:14.540 [2024-05-13 03:12:05.201842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.202108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.202134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.540 qpair failed and we were unable to recover it. 00:31:14.540 [2024-05-13 03:12:05.202417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.202648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.202676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.540 qpair failed and we were unable to recover it. 00:31:14.540 [2024-05-13 03:12:05.203062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.203388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.203412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.540 qpair failed and we were unable to recover it. 00:31:14.540 [2024-05-13 03:12:05.203707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.203977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.204012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.540 qpair failed and we were unable to recover it. 00:31:14.540 [2024-05-13 03:12:05.204354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.204659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.540 [2024-05-13 03:12:05.204724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.541 qpair failed and we were unable to recover it. 
00:31:14.541 [2024-05-13 03:12:05.204965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.205249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.205274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.541 qpair failed and we were unable to recover it. 00:31:14.541 [2024-05-13 03:12:05.205529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.205770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.205800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.541 qpair failed and we were unable to recover it. 00:31:14.541 [2024-05-13 03:12:05.206038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.206326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.206389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.541 qpair failed and we were unable to recover it. 00:31:14.541 [2024-05-13 03:12:05.206711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.206964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.206994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.541 qpair failed and we were unable to recover it. 00:31:14.541 [2024-05-13 03:12:05.207251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.207450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.207475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.541 qpair failed and we were unable to recover it. 00:31:14.541 [2024-05-13 03:12:05.207736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.207983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.208034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.541 qpair failed and we were unable to recover it. 00:31:14.541 [2024-05-13 03:12:05.208292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.208536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.208576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.541 qpair failed and we were unable to recover it. 
00:31:14.541 [2024-05-13 03:12:05.208821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.209051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.209077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.541 qpair failed and we were unable to recover it. 00:31:14.541 [2024-05-13 03:12:05.209378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.209588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.209617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.541 qpair failed and we were unable to recover it. 00:31:14.541 [2024-05-13 03:12:05.210091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.210451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.210479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.541 qpair failed and we were unable to recover it. 00:31:14.541 [2024-05-13 03:12:05.210761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.211004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.211034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.541 qpair failed and we were unable to recover it. 00:31:14.541 [2024-05-13 03:12:05.211274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.211739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.211769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.541 qpair failed and we were unable to recover it. 00:31:14.541 [2024-05-13 03:12:05.212021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.212240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.212266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.541 qpair failed and we were unable to recover it. 00:31:14.541 [2024-05-13 03:12:05.212583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.212846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.212876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.541 qpair failed and we were unable to recover it. 
00:31:14.541 [2024-05-13 03:12:05.213126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.213501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.213550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.541 qpair failed and we were unable to recover it. 00:31:14.541 [2024-05-13 03:12:05.213825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.214013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.214040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.541 qpair failed and we were unable to recover it. 00:31:14.541 [2024-05-13 03:12:05.214279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.214481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.214507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.541 qpair failed and we were unable to recover it. 00:31:14.541 [2024-05-13 03:12:05.214902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.215234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.215285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.541 qpair failed and we were unable to recover it. 00:31:14.541 [2024-05-13 03:12:05.215524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.215745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.215797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.541 qpair failed and we were unable to recover it. 00:31:14.541 [2024-05-13 03:12:05.216108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.216389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.216413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.541 qpair failed and we were unable to recover it. 00:31:14.541 [2024-05-13 03:12:05.216671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.216912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.216943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.541 qpair failed and we were unable to recover it. 
00:31:14.541 [2024-05-13 03:12:05.217225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.217659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.217717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.541 qpair failed and we were unable to recover it. 00:31:14.541 [2024-05-13 03:12:05.217924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.218160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.218190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.541 qpair failed and we were unable to recover it. 00:31:14.541 [2024-05-13 03:12:05.218459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.218691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.218729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.541 qpair failed and we were unable to recover it. 00:31:14.541 [2024-05-13 03:12:05.219027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.219279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.219315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.541 qpair failed and we were unable to recover it. 00:31:14.541 [2024-05-13 03:12:05.219558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.219805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.219832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.541 qpair failed and we were unable to recover it. 00:31:14.541 [2024-05-13 03:12:05.220071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.220319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.541 [2024-05-13 03:12:05.220345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.541 qpair failed and we were unable to recover it. 00:31:14.541 [2024-05-13 03:12:05.220561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.220883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.220934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.542 qpair failed and we were unable to recover it. 
00:31:14.542 [2024-05-13 03:12:05.221260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.221508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.221537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.542 qpair failed and we were unable to recover it. 00:31:14.542 [2024-05-13 03:12:05.221784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.221986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.222015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.542 qpair failed and we were unable to recover it. 00:31:14.542 [2024-05-13 03:12:05.222322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.222641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.222671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.542 qpair failed and we were unable to recover it. 00:31:14.542 [2024-05-13 03:12:05.222948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.223143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.223169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.542 qpair failed and we were unable to recover it. 00:31:14.542 [2024-05-13 03:12:05.223419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.223724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.223766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.542 qpair failed and we were unable to recover it. 00:31:14.542 [2024-05-13 03:12:05.224251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.224681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.224751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.542 qpair failed and we were unable to recover it. 00:31:14.542 [2024-05-13 03:12:05.224979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.225223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.225254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.542 qpair failed and we were unable to recover it. 
00:31:14.542 [2024-05-13 03:12:05.225497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.225768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.225799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.542 qpair failed and we were unable to recover it. 00:31:14.542 [2024-05-13 03:12:05.226050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.226245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.226271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.542 qpair failed and we were unable to recover it. 00:31:14.542 [2024-05-13 03:12:05.226486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.226759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.226800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.542 qpair failed and we were unable to recover it. 00:31:14.542 [2024-05-13 03:12:05.227086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.227398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.227437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.542 qpair failed and we were unable to recover it. 00:31:14.542 [2024-05-13 03:12:05.227762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.228083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.228133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.542 qpair failed and we were unable to recover it. 00:31:14.542 [2024-05-13 03:12:05.228399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.228761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.228791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.542 qpair failed and we were unable to recover it. 00:31:14.542 [2024-05-13 03:12:05.229031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.229511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.229561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.542 qpair failed and we were unable to recover it. 
00:31:14.542 [2024-05-13 03:12:05.229827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.230090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.230116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.542 qpair failed and we were unable to recover it. 00:31:14.542 [2024-05-13 03:12:05.230403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.230641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.230671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.542 qpair failed and we were unable to recover it. 00:31:14.542 [2024-05-13 03:12:05.230958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.231194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.231220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.542 qpair failed and we were unable to recover it. 00:31:14.542 [2024-05-13 03:12:05.231455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.231712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.231753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.542 qpair failed and we were unable to recover it. 00:31:14.542 [2024-05-13 03:12:05.231977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.232302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.232355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.542 qpair failed and we were unable to recover it. 00:31:14.542 [2024-05-13 03:12:05.232593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.232843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.232885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.542 qpair failed and we were unable to recover it. 00:31:14.542 [2024-05-13 03:12:05.233159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.233419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.233462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.542 qpair failed and we were unable to recover it. 
00:31:14.542 [2024-05-13 03:12:05.233709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.234002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.234028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.542 qpair failed and we were unable to recover it. 00:31:14.542 [2024-05-13 03:12:05.234317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.234523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.234554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.542 qpair failed and we were unable to recover it. 00:31:14.542 [2024-05-13 03:12:05.234847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.235150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.235212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.542 qpair failed and we were unable to recover it. 00:31:14.542 [2024-05-13 03:12:05.235492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.235738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.235768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.542 qpair failed and we were unable to recover it. 00:31:14.542 [2024-05-13 03:12:05.236038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.236382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.236423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.542 qpair failed and we were unable to recover it. 00:31:14.542 [2024-05-13 03:12:05.236673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.542 [2024-05-13 03:12:05.236915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.236945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.543 qpair failed and we were unable to recover it. 00:31:14.543 [2024-05-13 03:12:05.237189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.237645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.237708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.543 qpair failed and we were unable to recover it. 
00:31:14.543 [2024-05-13 03:12:05.237955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.238211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.238250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.543 qpair failed and we were unable to recover it. 00:31:14.543 [2024-05-13 03:12:05.238479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.238700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.238742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.543 qpair failed and we were unable to recover it. 00:31:14.543 [2024-05-13 03:12:05.239005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.239285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.239311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.543 qpair failed and we were unable to recover it. 00:31:14.543 [2024-05-13 03:12:05.239540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.239789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.239816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.543 qpair failed and we were unable to recover it. 00:31:14.543 [2024-05-13 03:12:05.240030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.240309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.240334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.543 qpair failed and we were unable to recover it. 00:31:14.543 [2024-05-13 03:12:05.240572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.240790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.240817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.543 qpair failed and we were unable to recover it. 00:31:14.543 [2024-05-13 03:12:05.241038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.241289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.241315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.543 qpair failed and we were unable to recover it. 
00:31:14.543 [2024-05-13 03:12:05.241564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.241786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.241814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.543 qpair failed and we were unable to recover it. 00:31:14.543 [2024-05-13 03:12:05.242058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.242269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.242295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.543 qpair failed and we were unable to recover it. 00:31:14.543 [2024-05-13 03:12:05.242574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.242825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.242852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.543 qpair failed and we were unable to recover it. 00:31:14.543 [2024-05-13 03:12:05.243114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.243362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.243389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.543 qpair failed and we were unable to recover it. 00:31:14.543 [2024-05-13 03:12:05.243631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.243862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.243889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.543 qpair failed and we were unable to recover it. 00:31:14.543 [2024-05-13 03:12:05.244106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.244335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.244378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.543 qpair failed and we were unable to recover it. 00:31:14.543 [2024-05-13 03:12:05.244633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.244849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.244875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.543 qpair failed and we were unable to recover it. 
00:31:14.543 [2024-05-13 03:12:05.245094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.245406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.245458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.543 qpair failed and we were unable to recover it. 00:31:14.543 [2024-05-13 03:12:05.245704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.245993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.246038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.543 qpair failed and we were unable to recover it. 00:31:14.543 [2024-05-13 03:12:05.246381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.246820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.246860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.543 qpair failed and we were unable to recover it. 00:31:14.543 [2024-05-13 03:12:05.247174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.247625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.247681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.543 qpair failed and we were unable to recover it. 00:31:14.543 [2024-05-13 03:12:05.247933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.248198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.248248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.543 qpair failed and we were unable to recover it. 00:31:14.543 [2024-05-13 03:12:05.248530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.248788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.248816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.543 qpair failed and we were unable to recover it. 00:31:14.543 [2024-05-13 03:12:05.249045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.249436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.249487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.543 qpair failed and we were unable to recover it. 
00:31:14.543 [2024-05-13 03:12:05.249786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.250056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.250097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.543 qpair failed and we were unable to recover it. 00:31:14.543 [2024-05-13 03:12:05.250358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.250788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.250830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.543 qpair failed and we were unable to recover it. 00:31:14.543 [2024-05-13 03:12:05.251056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.251270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.251310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.543 qpair failed and we were unable to recover it. 00:31:14.543 [2024-05-13 03:12:05.251586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.251828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.251869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.543 qpair failed and we were unable to recover it. 00:31:14.543 [2024-05-13 03:12:05.252200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.543 [2024-05-13 03:12:05.252414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.544 [2024-05-13 03:12:05.252439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.544 qpair failed and we were unable to recover it. 00:31:14.544 [2024-05-13 03:12:05.252772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.544 [2024-05-13 03:12:05.252956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.544 [2024-05-13 03:12:05.252995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.544 qpair failed and we were unable to recover it. 00:31:14.544 [2024-05-13 03:12:05.253282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.544 [2024-05-13 03:12:05.253556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.544 [2024-05-13 03:12:05.253604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.544 qpair failed and we were unable to recover it. 
00:31:14.544 [2024-05-13 03:12:05.253847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.544 [2024-05-13 03:12:05.254120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.544 [2024-05-13 03:12:05.254146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.544 qpair failed and we were unable to recover it. 00:31:14.544 [2024-05-13 03:12:05.254497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.544 [2024-05-13 03:12:05.254796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.544 [2024-05-13 03:12:05.254831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.544 qpair failed and we were unable to recover it. 00:31:14.544 [2024-05-13 03:12:05.255047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.544 [2024-05-13 03:12:05.255272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.544 [2024-05-13 03:12:05.255299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.544 qpair failed and we were unable to recover it. 00:31:14.544 [2024-05-13 03:12:05.255507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.544 [2024-05-13 03:12:05.255762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.544 [2024-05-13 03:12:05.255789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.544 qpair failed and we were unable to recover it. 00:31:14.544 [2024-05-13 03:12:05.256025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.544 [2024-05-13 03:12:05.256253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.544 [2024-05-13 03:12:05.256295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.544 qpair failed and we were unable to recover it. 00:31:14.544 [2024-05-13 03:12:05.256560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.544 [2024-05-13 03:12:05.256842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.544 [2024-05-13 03:12:05.256869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.544 qpair failed and we were unable to recover it. 00:31:14.544 [2024-05-13 03:12:05.257099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.544 [2024-05-13 03:12:05.257345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.544 [2024-05-13 03:12:05.257372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.544 qpair failed and we were unable to recover it. 
00:31:14.544 [2024-05-13 03:12:05.257631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.544 [2024-05-13 03:12:05.257907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.544 [2024-05-13 03:12:05.257933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.544 qpair failed and we were unable to recover it. 00:31:14.544 [2024-05-13 03:12:05.258178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.544 [2024-05-13 03:12:05.258406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.544 [2024-05-13 03:12:05.258452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.544 qpair failed and we were unable to recover it. 00:31:14.544 [2024-05-13 03:12:05.258704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.544 [2024-05-13 03:12:05.258936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.544 [2024-05-13 03:12:05.258962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f003c000b90 with addr=10.0.0.2, port=4420 00:31:14.544 qpair failed and we were unable to recover it. 00:31:14.544 [2024-05-13 03:12:05.259260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.544 [2024-05-13 03:12:05.259520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.544 [2024-05-13 03:12:05.259579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.544 qpair failed and we were unable to recover it. 00:31:14.544 [2024-05-13 03:12:05.259836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.544 [2024-05-13 03:12:05.260078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.544 [2024-05-13 03:12:05.260111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.544 qpair failed and we were unable to recover it. 00:31:14.544 [2024-05-13 03:12:05.260391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.544 [2024-05-13 03:12:05.260674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.544 [2024-05-13 03:12:05.260742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.544 qpair failed and we were unable to recover it. 00:31:14.544 [2024-05-13 03:12:05.260944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.544 [2024-05-13 03:12:05.261210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.544 [2024-05-13 03:12:05.261238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.544 qpair failed and we were unable to recover it. 
00:31:14.544 [2024-05-13 03:12:05.261560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.544 [2024-05-13 03:12:05.261843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.544 [2024-05-13 03:12:05.261870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.544 qpair failed and we were unable to recover it. 00:31:14.544 [2024-05-13 03:12:05.262098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.544 [2024-05-13 03:12:05.262339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.544 [2024-05-13 03:12:05.262367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.544 qpair failed and we were unable to recover it. 00:31:14.544 [2024-05-13 03:12:05.262687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.544 [2024-05-13 03:12:05.262909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.544 [2024-05-13 03:12:05.262934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.544 qpair failed and we were unable to recover it. 00:31:14.544 [2024-05-13 03:12:05.263208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.544 [2024-05-13 03:12:05.263499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.544 [2024-05-13 03:12:05.263552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.544 qpair failed and we were unable to recover it. 00:31:14.544 [2024-05-13 03:12:05.263814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.544 [2024-05-13 03:12:05.264037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.544 [2024-05-13 03:12:05.264077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.544 qpair failed and we were unable to recover it. 00:31:14.544 [2024-05-13 03:12:05.264345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.544 [2024-05-13 03:12:05.264621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.264666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.545 qpair failed and we were unable to recover it. 00:31:14.545 [2024-05-13 03:12:05.264891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.265165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.265193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.545 qpair failed and we were unable to recover it. 
00:31:14.545 [2024-05-13 03:12:05.265479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.265734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.265765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.545 qpair failed and we were unable to recover it. 00:31:14.545 [2024-05-13 03:12:05.265947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.266189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.266217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.545 qpair failed and we were unable to recover it. 00:31:14.545 [2024-05-13 03:12:05.266474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.266713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.266757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.545 qpair failed and we were unable to recover it. 00:31:14.545 [2024-05-13 03:12:05.266962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.267234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.267262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.545 qpair failed and we were unable to recover it. 00:31:14.545 [2024-05-13 03:12:05.267657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.267922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.267948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.545 qpair failed and we were unable to recover it. 00:31:14.545 [2024-05-13 03:12:05.268216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.268506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.268534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.545 qpair failed and we were unable to recover it. 00:31:14.545 [2024-05-13 03:12:05.268759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.268973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.269019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.545 qpair failed and we were unable to recover it. 
00:31:14.545 [2024-05-13 03:12:05.269265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.269508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.269552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.545 qpair failed and we were unable to recover it. 00:31:14.545 [2024-05-13 03:12:05.269805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.270059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.270087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.545 qpair failed and we were unable to recover it. 00:31:14.545 [2024-05-13 03:12:05.270364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.270637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.270661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.545 qpair failed and we were unable to recover it. 00:31:14.545 [2024-05-13 03:12:05.270898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.271149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.271178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.545 qpair failed and we were unable to recover it. 00:31:14.545 [2024-05-13 03:12:05.271438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.271684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.271724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.545 qpair failed and we were unable to recover it. 00:31:14.545 [2024-05-13 03:12:05.271936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.272148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.272177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.545 qpair failed and we were unable to recover it. 00:31:14.545 [2024-05-13 03:12:05.272421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.272664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.272705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.545 qpair failed and we were unable to recover it. 
00:31:14.545 [2024-05-13 03:12:05.272932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.273170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.273199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.545 qpair failed and we were unable to recover it. 00:31:14.545 [2024-05-13 03:12:05.273438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.273658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.273702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.545 qpair failed and we were unable to recover it. 00:31:14.545 [2024-05-13 03:12:05.273902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.274154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.274183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.545 qpair failed and we were unable to recover it. 00:31:14.545 [2024-05-13 03:12:05.274456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.274713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.274742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.545 qpair failed and we were unable to recover it. 00:31:14.545 [2024-05-13 03:12:05.274985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.275224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.275253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.545 qpair failed and we were unable to recover it. 00:31:14.545 [2024-05-13 03:12:05.275519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.275783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.275812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.545 qpair failed and we were unable to recover it. 00:31:14.545 [2024-05-13 03:12:05.276053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.276291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.276320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.545 qpair failed and we were unable to recover it. 
00:31:14.545 [2024-05-13 03:12:05.276741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.276957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.276987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.545 qpair failed and we were unable to recover it. 00:31:14.545 [2024-05-13 03:12:05.277231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.277470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.277498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.545 qpair failed and we were unable to recover it. 00:31:14.545 [2024-05-13 03:12:05.277715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.277935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.277961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.545 qpair failed and we were unable to recover it. 00:31:14.545 [2024-05-13 03:12:05.278238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.278452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.278490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.545 qpair failed and we were unable to recover it. 00:31:14.545 [2024-05-13 03:12:05.278740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.278961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.545 [2024-05-13 03:12:05.278992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.546 qpair failed and we were unable to recover it. 00:31:14.546 [2024-05-13 03:12:05.279267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.279567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.279614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.546 qpair failed and we were unable to recover it. 00:31:14.546 [2024-05-13 03:12:05.279870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.280095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.280123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.546 qpair failed and we were unable to recover it. 
00:31:14.546 [2024-05-13 03:12:05.280391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.280673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.280708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.546 qpair failed and we were unable to recover it. 00:31:14.546 [2024-05-13 03:12:05.280961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.281326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.281372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.546 qpair failed and we were unable to recover it. 00:31:14.546 [2024-05-13 03:12:05.281641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.281891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.281920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.546 qpair failed and we were unable to recover it. 00:31:14.546 [2024-05-13 03:12:05.282162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.282408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.282437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.546 qpair failed and we were unable to recover it. 00:31:14.546 [2024-05-13 03:12:05.282679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.282926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.282955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.546 qpair failed and we were unable to recover it. 00:31:14.546 [2024-05-13 03:12:05.283208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.283445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.283470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.546 qpair failed and we were unable to recover it. 00:31:14.546 [2024-05-13 03:12:05.283743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.283968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.284006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.546 qpair failed and we were unable to recover it. 
00:31:14.546 [2024-05-13 03:12:05.284233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.284532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.284566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.546 qpair failed and we were unable to recover it. 00:31:14.546 [2024-05-13 03:12:05.284792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.285015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.285043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.546 qpair failed and we were unable to recover it. 00:31:14.546 [2024-05-13 03:12:05.285322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.285590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.285618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.546 qpair failed and we were unable to recover it. 00:31:14.546 [2024-05-13 03:12:05.285846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.286060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.286088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.546 qpair failed and we were unable to recover it. 00:31:14.546 [2024-05-13 03:12:05.286303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.286521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.286547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.546 qpair failed and we were unable to recover it. 00:31:14.546 [2024-05-13 03:12:05.286782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.286992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.287023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.546 qpair failed and we were unable to recover it. 00:31:14.546 [2024-05-13 03:12:05.287261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.287514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.287540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.546 qpair failed and we were unable to recover it. 
00:31:14.546 [2024-05-13 03:12:05.287829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.288028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.288053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.546 qpair failed and we were unable to recover it. 00:31:14.546 [2024-05-13 03:12:05.288274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.288549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.288577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.546 qpair failed and we were unable to recover it. 00:31:14.546 [2024-05-13 03:12:05.288818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.289015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.289041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.546 qpair failed and we were unable to recover it. 00:31:14.546 [2024-05-13 03:12:05.289273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.289490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.289519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.546 qpair failed and we were unable to recover it. 00:31:14.546 [2024-05-13 03:12:05.289803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.290009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.290037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.546 qpair failed and we were unable to recover it. 00:31:14.546 [2024-05-13 03:12:05.290246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.290484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.290513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.546 qpair failed and we were unable to recover it. 00:31:14.546 [2024-05-13 03:12:05.290786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.291003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.291032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.546 qpair failed and we were unable to recover it. 
00:31:14.546 [2024-05-13 03:12:05.291243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.291554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.291599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.546 qpair failed and we were unable to recover it. 00:31:14.546 [2024-05-13 03:12:05.291859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.292056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.292082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.546 qpair failed and we were unable to recover it. 00:31:14.546 [2024-05-13 03:12:05.292329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.292555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.292585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.546 qpair failed and we were unable to recover it. 00:31:14.546 [2024-05-13 03:12:05.292830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.293043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.293072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.546 qpair failed and we were unable to recover it. 00:31:14.546 [2024-05-13 03:12:05.293320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.546 [2024-05-13 03:12:05.293561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.547 [2024-05-13 03:12:05.293587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.547 qpair failed and we were unable to recover it. 00:31:14.547 [2024-05-13 03:12:05.293782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.547 [2024-05-13 03:12:05.293991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.547 [2024-05-13 03:12:05.294018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f004c000b90 with addr=10.0.0.2, port=4420 00:31:14.547 qpair failed and we were unable to recover it. 00:31:14.547 [2024-05-13 03:12:05.294293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.547 [2024-05-13 03:12:05.294607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.547 [2024-05-13 03:12:05.294664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.547 qpair failed and we were unable to recover it. 
00:31:14.547 [2024-05-13 03:12:05.294887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.547 [2024-05-13 03:12:05.295086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.547 [2024-05-13 03:12:05.295112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.547 qpair failed and we were unable to recover it. 00:31:14.547 [2024-05-13 03:12:05.295386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.547 [2024-05-13 03:12:05.295661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.547 [2024-05-13 03:12:05.295708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.547 qpair failed and we were unable to recover it. 00:31:14.547 [2024-05-13 03:12:05.295932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.547 [2024-05-13 03:12:05.296174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.547 [2024-05-13 03:12:05.296217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.547 qpair failed and we were unable to recover it. 00:31:14.547 [2024-05-13 03:12:05.296467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.547 [2024-05-13 03:12:05.296674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.547 [2024-05-13 03:12:05.296718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.547 qpair failed and we were unable to recover it. 00:31:14.547 [2024-05-13 03:12:05.296940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.547 [2024-05-13 03:12:05.297225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.547 [2024-05-13 03:12:05.297268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.547 qpair failed and we were unable to recover it. 00:31:14.547 [2024-05-13 03:12:05.297535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.547 [2024-05-13 03:12:05.297813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.547 [2024-05-13 03:12:05.297858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.547 qpair failed and we were unable to recover it. 00:31:14.547 [2024-05-13 03:12:05.298089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.547 [2024-05-13 03:12:05.298387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.547 [2024-05-13 03:12:05.298431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.547 qpair failed and we were unable to recover it. 
00:31:14.547 [2024-05-13 03:12:05.298650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.547 [2024-05-13 03:12:05.298892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.547 [2024-05-13 03:12:05.298938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.547 qpair failed and we were unable to recover it. 00:31:14.547 [2024-05-13 03:12:05.299206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.547 [2024-05-13 03:12:05.299497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.547 [2024-05-13 03:12:05.299542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.547 qpair failed and we were unable to recover it. 00:31:14.547 [2024-05-13 03:12:05.299763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.547 [2024-05-13 03:12:05.299989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.547 [2024-05-13 03:12:05.300032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.547 qpair failed and we were unable to recover it. 00:31:14.547 [2024-05-13 03:12:05.300291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.547 [2024-05-13 03:12:05.300574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.547 [2024-05-13 03:12:05.300619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.547 qpair failed and we were unable to recover it. 00:31:14.547 [2024-05-13 03:12:05.300848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.547 [2024-05-13 03:12:05.301076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.547 [2024-05-13 03:12:05.301120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.547 qpair failed and we were unable to recover it. 00:31:14.547 [2024-05-13 03:12:05.301359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.547 [2024-05-13 03:12:05.301632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.547 [2024-05-13 03:12:05.301658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.547 qpair failed and we were unable to recover it. 00:31:14.547 [2024-05-13 03:12:05.301897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.547 [2024-05-13 03:12:05.302183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.547 [2024-05-13 03:12:05.302229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.547 qpair failed and we were unable to recover it. 
00:31:14.547 [2024-05-13 03:12:05.302502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:14.547 [2024-05-13 03:12:05.302743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:14.547 [2024-05-13 03:12:05.302769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420
00:31:14.547 qpair failed and we were unable to recover it.
00:31:14.821 [... the same four-record pattern repeats continuously for every reconnect attempt from 03:12:05.302502 through 03:12:05.384467 (log timestamps 00:31:14.547-00:31:14.821): two posix_sock_create connect() failures with errno = 111, followed by nvme_tcp_qpair_connect_sock reporting a sock connection error for tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." ...]
00:31:14.821 [2024-05-13 03:12:05.384720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.821 [2024-05-13 03:12:05.384929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.821 [2024-05-13 03:12:05.384957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.821 qpair failed and we were unable to recover it. 00:31:14.821 [2024-05-13 03:12:05.385254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.821 [2024-05-13 03:12:05.385544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.821 [2024-05-13 03:12:05.385588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.821 qpair failed and we were unable to recover it. 00:31:14.821 [2024-05-13 03:12:05.385839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.821 [2024-05-13 03:12:05.386065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.821 [2024-05-13 03:12:05.386110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.821 qpair failed and we were unable to recover it. 00:31:14.821 [2024-05-13 03:12:05.386362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.821 [2024-05-13 03:12:05.386586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.821 [2024-05-13 03:12:05.386614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.821 qpair failed and we were unable to recover it. 00:31:14.821 [2024-05-13 03:12:05.386848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.821 [2024-05-13 03:12:05.387102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.821 [2024-05-13 03:12:05.387147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.821 qpair failed and we were unable to recover it. 00:31:14.821 [2024-05-13 03:12:05.387532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.821 [2024-05-13 03:12:05.387837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.821 [2024-05-13 03:12:05.387864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.821 qpair failed and we were unable to recover it. 00:31:14.821 [2024-05-13 03:12:05.388151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.821 [2024-05-13 03:12:05.388496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.821 [2024-05-13 03:12:05.388544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.821 qpair failed and we were unable to recover it. 
00:31:14.821 [2024-05-13 03:12:05.388807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.821 [2024-05-13 03:12:05.389102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.821 [2024-05-13 03:12:05.389149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.821 qpair failed and we were unable to recover it. 00:31:14.821 [2024-05-13 03:12:05.389426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.821 [2024-05-13 03:12:05.389650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.821 [2024-05-13 03:12:05.389692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.821 qpair failed and we were unable to recover it. 00:31:14.821 [2024-05-13 03:12:05.389927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.821 [2024-05-13 03:12:05.390183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.821 [2024-05-13 03:12:05.390227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.821 qpair failed and we were unable to recover it. 00:31:14.821 [2024-05-13 03:12:05.390444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.821 [2024-05-13 03:12:05.390667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.821 [2024-05-13 03:12:05.390714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.821 qpair failed and we were unable to recover it. 00:31:14.821 [2024-05-13 03:12:05.390942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.821 [2024-05-13 03:12:05.391190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.821 [2024-05-13 03:12:05.391234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.821 qpair failed and we were unable to recover it. 00:31:14.821 [2024-05-13 03:12:05.391469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.821 [2024-05-13 03:12:05.391718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.821 [2024-05-13 03:12:05.391755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.821 qpair failed and we were unable to recover it. 00:31:14.821 [2024-05-13 03:12:05.391994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.821 [2024-05-13 03:12:05.392268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.392312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.822 qpair failed and we were unable to recover it. 
00:31:14.822 [2024-05-13 03:12:05.392545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.392833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.392860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.822 qpair failed and we were unable to recover it. 00:31:14.822 [2024-05-13 03:12:05.393083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.393353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.393398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.822 qpair failed and we were unable to recover it. 00:31:14.822 [2024-05-13 03:12:05.393612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.393838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.393865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.822 qpair failed and we were unable to recover it. 00:31:14.822 [2024-05-13 03:12:05.394147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.394422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.394449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.822 qpair failed and we were unable to recover it. 00:31:14.822 [2024-05-13 03:12:05.394707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.394952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.394993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.822 qpair failed and we were unable to recover it. 00:31:14.822 [2024-05-13 03:12:05.395314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.395715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.395780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.822 qpair failed and we were unable to recover it. 00:31:14.822 [2024-05-13 03:12:05.395995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.396238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.396283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.822 qpair failed and we were unable to recover it. 
00:31:14.822 [2024-05-13 03:12:05.396571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.396929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.396956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.822 qpair failed and we were unable to recover it. 00:31:14.822 [2024-05-13 03:12:05.397278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.397672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.397759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.822 qpair failed and we were unable to recover it. 00:31:14.822 [2024-05-13 03:12:05.397994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.398276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.398320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.822 qpair failed and we were unable to recover it. 00:31:14.822 [2024-05-13 03:12:05.398560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.398819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.398846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.822 qpair failed and we were unable to recover it. 00:31:14.822 [2024-05-13 03:12:05.399096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.399468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.399516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.822 qpair failed and we were unable to recover it. 00:31:14.822 [2024-05-13 03:12:05.399792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.400043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.400073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.822 qpair failed and we were unable to recover it. 00:31:14.822 [2024-05-13 03:12:05.400373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.400689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.400725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.822 qpair failed and we were unable to recover it. 
00:31:14.822 [2024-05-13 03:12:05.400960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.401246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.401291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.822 qpair failed and we were unable to recover it. 00:31:14.822 [2024-05-13 03:12:05.401582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.401876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.401903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.822 qpair failed and we were unable to recover it. 00:31:14.822 [2024-05-13 03:12:05.402181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.402472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.402501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.822 qpair failed and we were unable to recover it. 00:31:14.822 [2024-05-13 03:12:05.402716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.402977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.403004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.822 qpair failed and we were unable to recover it. 00:31:14.822 [2024-05-13 03:12:05.403278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.403597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.403643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.822 qpair failed and we were unable to recover it. 00:31:14.822 [2024-05-13 03:12:05.403869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.404148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.404192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.822 qpair failed and we were unable to recover it. 00:31:14.822 [2024-05-13 03:12:05.404479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.404782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.404809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.822 qpair failed and we were unable to recover it. 
00:31:14.822 [2024-05-13 03:12:05.405061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.405355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.405402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.822 qpair failed and we were unable to recover it. 00:31:14.822 [2024-05-13 03:12:05.405653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.405972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.406014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.822 qpair failed and we were unable to recover it. 00:31:14.822 [2024-05-13 03:12:05.406261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.406509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.406556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.822 qpair failed and we were unable to recover it. 00:31:14.822 [2024-05-13 03:12:05.406831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.407179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.407230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.822 qpair failed and we were unable to recover it. 00:31:14.822 [2024-05-13 03:12:05.407506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.407859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.407886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.822 qpair failed and we were unable to recover it. 00:31:14.822 [2024-05-13 03:12:05.408206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.408514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.408556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.822 qpair failed and we were unable to recover it. 00:31:14.822 [2024-05-13 03:12:05.408765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.409004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.409030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.822 qpair failed and we were unable to recover it. 
00:31:14.822 [2024-05-13 03:12:05.409273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.409567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.822 [2024-05-13 03:12:05.409593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.822 qpair failed and we were unable to recover it. 00:31:14.822 [2024-05-13 03:12:05.409792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.409991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.410017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.823 qpair failed and we were unable to recover it. 00:31:14.823 [2024-05-13 03:12:05.410277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.410545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.410590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.823 qpair failed and we were unable to recover it. 00:31:14.823 [2024-05-13 03:12:05.410848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.411168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.411211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.823 qpair failed and we were unable to recover it. 00:31:14.823 [2024-05-13 03:12:05.411492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.411857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.411899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.823 qpair failed and we were unable to recover it. 00:31:14.823 [2024-05-13 03:12:05.412142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.412411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.412455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.823 qpair failed and we were unable to recover it. 00:31:14.823 [2024-05-13 03:12:05.412755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.413042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.413085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.823 qpair failed and we were unable to recover it. 
00:31:14.823 [2024-05-13 03:12:05.413366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.413719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.413761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.823 qpair failed and we were unable to recover it. 00:31:14.823 [2024-05-13 03:12:05.414057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.414330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.414374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.823 qpair failed and we were unable to recover it. 00:31:14.823 [2024-05-13 03:12:05.414661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.414947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.414973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.823 qpair failed and we were unable to recover it. 00:31:14.823 [2024-05-13 03:12:05.415224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.415546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.415590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.823 qpair failed and we were unable to recover it. 00:31:14.823 [2024-05-13 03:12:05.415799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.416117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.416162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.823 qpair failed and we were unable to recover it. 00:31:14.823 [2024-05-13 03:12:05.416407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.416671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.416720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.823 qpair failed and we were unable to recover it. 00:31:14.823 [2024-05-13 03:12:05.417094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.417394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.417437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.823 qpair failed and we were unable to recover it. 
00:31:14.823 [2024-05-13 03:12:05.417679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.417880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.417907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.823 qpair failed and we were unable to recover it. 00:31:14.823 [2024-05-13 03:12:05.418119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.418439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.418481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.823 qpair failed and we were unable to recover it. 00:31:14.823 [2024-05-13 03:12:05.418738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.419170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.419233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.823 qpair failed and we were unable to recover it. 00:31:14.823 [2024-05-13 03:12:05.419488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.419797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.419842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.823 qpair failed and we were unable to recover it. 00:31:14.823 [2024-05-13 03:12:05.420059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.420269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.420313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.823 qpair failed and we were unable to recover it. 00:31:14.823 [2024-05-13 03:12:05.420577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.420834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.420863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.823 qpair failed and we were unable to recover it. 00:31:14.823 [2024-05-13 03:12:05.421114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.421427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.421476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.823 qpair failed and we were unable to recover it. 
00:31:14.823 [2024-05-13 03:12:05.421797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.422118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.422166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.823 qpair failed and we were unable to recover it. 00:31:14.823 [2024-05-13 03:12:05.422449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.422702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.422744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.823 qpair failed and we were unable to recover it. 00:31:14.823 [2024-05-13 03:12:05.422971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.423195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.423239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.823 qpair failed and we were unable to recover it. 00:31:14.823 [2024-05-13 03:12:05.423556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.423808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.423834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.823 qpair failed and we were unable to recover it. 00:31:14.823 [2024-05-13 03:12:05.424122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.424410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.424454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.823 qpair failed and we were unable to recover it. 00:31:14.823 [2024-05-13 03:12:05.424671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.424962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.424993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.823 qpair failed and we were unable to recover it. 00:31:14.823 [2024-05-13 03:12:05.425239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.425522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.425573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.823 qpair failed and we were unable to recover it. 
00:31:14.823 [2024-05-13 03:12:05.425814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.426107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.426152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.823 qpair failed and we were unable to recover it. 00:31:14.823 [2024-05-13 03:12:05.426395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.426718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.426746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.823 qpair failed and we were unable to recover it. 00:31:14.823 [2024-05-13 03:12:05.426956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.427268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.823 [2024-05-13 03:12:05.427313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.823 qpair failed and we were unable to recover it. 00:31:14.824 [2024-05-13 03:12:05.427547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.427775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.427802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.824 qpair failed and we were unable to recover it. 00:31:14.824 [2024-05-13 03:12:05.428056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.428378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.428422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.824 qpair failed and we were unable to recover it. 00:31:14.824 [2024-05-13 03:12:05.428664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.428902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.428947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.824 qpair failed and we were unable to recover it. 00:31:14.824 [2024-05-13 03:12:05.429184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.429451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.429497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.824 qpair failed and we were unable to recover it. 
00:31:14.824 [2024-05-13 03:12:05.429710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.429957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.429985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.824 qpair failed and we were unable to recover it. 00:31:14.824 [2024-05-13 03:12:05.430234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.430525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.430574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.824 qpair failed and we were unable to recover it. 00:31:14.824 [2024-05-13 03:12:05.430850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.431151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.431195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.824 qpair failed and we were unable to recover it. 00:31:14.824 [2024-05-13 03:12:05.431391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.431622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.431649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.824 qpair failed and we were unable to recover it. 00:31:14.824 [2024-05-13 03:12:05.431942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.432272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.432306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.824 qpair failed and we were unable to recover it. 00:31:14.824 [2024-05-13 03:12:05.432607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.432955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.433000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.824 qpair failed and we were unable to recover it. 00:31:14.824 [2024-05-13 03:12:05.433276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.433583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.433627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.824 qpair failed and we were unable to recover it. 
00:31:14.824 [2024-05-13 03:12:05.433940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.434291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.434336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.824 qpair failed and we were unable to recover it. 00:31:14.824 [2024-05-13 03:12:05.434583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.434804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.434831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.824 qpair failed and we were unable to recover it. 00:31:14.824 [2024-05-13 03:12:05.435119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.435450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.435494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.824 qpair failed and we were unable to recover it. 00:31:14.824 [2024-05-13 03:12:05.435676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.435964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.435992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.824 qpair failed and we were unable to recover it. 00:31:14.824 [2024-05-13 03:12:05.436301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.436610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.436641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.824 qpair failed and we were unable to recover it. 00:31:14.824 [2024-05-13 03:12:05.436935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.437265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.437295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.824 qpair failed and we were unable to recover it. 00:31:14.824 [2024-05-13 03:12:05.437553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.437871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.437897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.824 qpair failed and we were unable to recover it. 
00:31:14.824 [2024-05-13 03:12:05.438145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.438498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.438531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.824 qpair failed and we were unable to recover it. 00:31:14.824 [2024-05-13 03:12:05.438769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.439046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.439090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.824 qpair failed and we were unable to recover it. 00:31:14.824 [2024-05-13 03:12:05.439390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.439708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.439750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.824 qpair failed and we were unable to recover it. 00:31:14.824 [2024-05-13 03:12:05.440084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.440349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.440394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.824 qpair failed and we were unable to recover it. 00:31:14.824 [2024-05-13 03:12:05.440669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.440906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.440933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.824 qpair failed and we were unable to recover it. 00:31:14.824 [2024-05-13 03:12:05.441212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.441493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.441538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.824 qpair failed and we were unable to recover it. 00:31:14.824 [2024-05-13 03:12:05.441769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.442023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.442069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.824 qpair failed and we were unable to recover it. 
00:31:14.824 [2024-05-13 03:12:05.442456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.442718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.442752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.824 qpair failed and we were unable to recover it. 00:31:14.824 [2024-05-13 03:12:05.442978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.443300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.443331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.824 qpair failed and we were unable to recover it. 00:31:14.824 [2024-05-13 03:12:05.443613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.443846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.443873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.824 qpair failed and we were unable to recover it. 00:31:14.824 [2024-05-13 03:12:05.444151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.444488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.444534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.824 qpair failed and we were unable to recover it. 00:31:14.824 [2024-05-13 03:12:05.444797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.824 [2024-05-13 03:12:05.445053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.445097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.825 qpair failed and we were unable to recover it. 00:31:14.825 [2024-05-13 03:12:05.445394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.445656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.445703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.825 qpair failed and we were unable to recover it. 00:31:14.825 [2024-05-13 03:12:05.445953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.446228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.446272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.825 qpair failed and we were unable to recover it. 
00:31:14.825 [2024-05-13 03:12:05.446531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.446792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.446819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.825 qpair failed and we were unable to recover it. 00:31:14.825 [2024-05-13 03:12:05.447092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.447425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.447484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.825 qpair failed and we were unable to recover it. 00:31:14.825 [2024-05-13 03:12:05.447714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.447985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.448028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.825 qpair failed and we were unable to recover it. 00:31:14.825 [2024-05-13 03:12:05.448300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.448619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.448670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.825 qpair failed and we were unable to recover it. 00:31:14.825 [2024-05-13 03:12:05.448911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.449190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.449233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.825 qpair failed and we were unable to recover it. 00:31:14.825 [2024-05-13 03:12:05.449528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.449826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.449853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.825 qpair failed and we were unable to recover it. 00:31:14.825 [2024-05-13 03:12:05.450117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.450373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.450417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.825 qpair failed and we were unable to recover it. 
00:31:14.825 [2024-05-13 03:12:05.450658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.450870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.450902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.825 qpair failed and we were unable to recover it. 00:31:14.825 [2024-05-13 03:12:05.451124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.451389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.451433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.825 qpair failed and we were unable to recover it. 00:31:14.825 [2024-05-13 03:12:05.451674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.451897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.451924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.825 qpair failed and we were unable to recover it. 00:31:14.825 [2024-05-13 03:12:05.452201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.452427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.452474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.825 qpair failed and we were unable to recover it. 00:31:14.825 [2024-05-13 03:12:05.452656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.452887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.452913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.825 qpair failed and we were unable to recover it. 00:31:14.825 [2024-05-13 03:12:05.453332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.453691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.453759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.825 qpair failed and we were unable to recover it. 00:31:14.825 [2024-05-13 03:12:05.454001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.454275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.454320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.825 qpair failed and we were unable to recover it. 
00:31:14.825 [2024-05-13 03:12:05.454710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.454926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.454953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.825 qpair failed and we were unable to recover it. 00:31:14.825 [2024-05-13 03:12:05.455278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.455549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.455593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.825 qpair failed and we were unable to recover it. 00:31:14.825 [2024-05-13 03:12:05.455799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.456035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.456061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.825 qpair failed and we were unable to recover it. 00:31:14.825 [2024-05-13 03:12:05.456514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.456780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.456818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.825 qpair failed and we were unable to recover it. 00:31:14.825 [2024-05-13 03:12:05.457077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.457402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.457448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.825 qpair failed and we were unable to recover it. 00:31:14.825 [2024-05-13 03:12:05.457639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.457917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.457943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.825 qpair failed and we were unable to recover it. 00:31:14.825 [2024-05-13 03:12:05.458223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.458683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.458765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.825 qpair failed and we were unable to recover it. 
00:31:14.825 [2024-05-13 03:12:05.459013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.459474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.459527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.825 qpair failed and we were unable to recover it. 00:31:14.825 [2024-05-13 03:12:05.459785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.460012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.460055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.825 qpair failed and we were unable to recover it. 00:31:14.825 [2024-05-13 03:12:05.460338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.460642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.460686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.825 qpair failed and we were unable to recover it. 00:31:14.825 [2024-05-13 03:12:05.460961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.461300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.461343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.825 qpair failed and we were unable to recover it. 00:31:14.825 [2024-05-13 03:12:05.461612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.461815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.461842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.825 qpair failed and we were unable to recover it. 00:31:14.825 [2024-05-13 03:12:05.462129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.462485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.825 [2024-05-13 03:12:05.462523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.825 qpair failed and we were unable to recover it. 00:31:14.826 [2024-05-13 03:12:05.462788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.826 [2024-05-13 03:12:05.463037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.826 [2024-05-13 03:12:05.463082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.826 qpair failed and we were unable to recover it. 
00:31:14.826 [2024-05-13 03:12:05.463401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.826 [2024-05-13 03:12:05.463660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.826 [2024-05-13 03:12:05.463686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.826 qpair failed and we were unable to recover it. 00:31:14.826 [2024-05-13 03:12:05.463953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.826 [2024-05-13 03:12:05.464283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.826 [2024-05-13 03:12:05.464326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.826 qpair failed and we were unable to recover it. 00:31:14.826 [2024-05-13 03:12:05.464735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.826 [2024-05-13 03:12:05.465034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.826 [2024-05-13 03:12:05.465061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.826 qpair failed and we were unable to recover it. 00:31:14.826 [2024-05-13 03:12:05.465350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.826 [2024-05-13 03:12:05.465665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.826 [2024-05-13 03:12:05.465703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.826 qpair failed and we were unable to recover it. 00:31:14.826 [2024-05-13 03:12:05.465961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.826 [2024-05-13 03:12:05.466248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.826 [2024-05-13 03:12:05.466293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.826 qpair failed and we were unable to recover it. 00:31:14.826 [2024-05-13 03:12:05.466596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.826 [2024-05-13 03:12:05.466840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.826 [2024-05-13 03:12:05.466867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.826 qpair failed and we were unable to recover it. 00:31:14.826 [2024-05-13 03:12:05.467094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.826 [2024-05-13 03:12:05.467404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.826 [2024-05-13 03:12:05.467433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.826 qpair failed and we were unable to recover it. 
00:31:14.826 [2024-05-13 03:12:05.467705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.826 [2024-05-13 03:12:05.467919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.826 [2024-05-13 03:12:05.467958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.826 qpair failed and we were unable to recover it. 00:31:14.826 [2024-05-13 03:12:05.468211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.826 [2024-05-13 03:12:05.468547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.826 [2024-05-13 03:12:05.468599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.826 qpair failed and we were unable to recover it. 00:31:14.826 [2024-05-13 03:12:05.468826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.826 [2024-05-13 03:12:05.469044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.826 [2024-05-13 03:12:05.469087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.826 qpair failed and we were unable to recover it. 00:31:14.826 [2024-05-13 03:12:05.469337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.826 [2024-05-13 03:12:05.469656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.826 [2024-05-13 03:12:05.469712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.826 qpair failed and we were unable to recover it. 00:31:14.826 [2024-05-13 03:12:05.469968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.826 [2024-05-13 03:12:05.470236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.826 [2024-05-13 03:12:05.470279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.826 qpair failed and we were unable to recover it. 00:31:14.826 [2024-05-13 03:12:05.470559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.826 [2024-05-13 03:12:05.470884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.826 [2024-05-13 03:12:05.470910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.826 qpair failed and we were unable to recover it. 00:31:14.826 [2024-05-13 03:12:05.471193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.826 [2024-05-13 03:12:05.471455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.826 [2024-05-13 03:12:05.471496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.826 qpair failed and we were unable to recover it. 
00:31:14.826 [2024-05-13 03:12:05.471675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.826 [2024-05-13 03:12:05.471929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.826 [2024-05-13 03:12:05.471956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.826 qpair failed and we were unable to recover it. 00:31:14.826 [2024-05-13 03:12:05.472264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.826 [2024-05-13 03:12:05.472685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.826 [2024-05-13 03:12:05.472717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.826 qpair failed and we were unable to recover it. 00:31:14.826 [2024-05-13 03:12:05.472960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.826 [2024-05-13 03:12:05.473236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.826 [2024-05-13 03:12:05.473284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.826 qpair failed and we were unable to recover it. 00:31:14.826 [2024-05-13 03:12:05.473537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.826 [2024-05-13 03:12:05.473783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.826 [2024-05-13 03:12:05.473811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.826 qpair failed and we were unable to recover it. 00:31:14.826 [2024-05-13 03:12:05.474067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.826 [2024-05-13 03:12:05.474315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.826 [2024-05-13 03:12:05.474361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.826 qpair failed and we were unable to recover it. 00:31:14.826 [2024-05-13 03:12:05.474640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.826 [2024-05-13 03:12:05.474946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.826 [2024-05-13 03:12:05.475002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.826 qpair failed and we were unable to recover it. 00:31:14.826 [2024-05-13 03:12:05.475241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.826 [2024-05-13 03:12:05.475672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.826 [2024-05-13 03:12:05.475732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.826 qpair failed and we were unable to recover it. 
00:31:14.826 [2024-05-13 03:12:05.475934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.826 [2024-05-13 03:12:05.476364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.826 [2024-05-13 03:12:05.476409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.826 qpair failed and we were unable to recover it. 00:31:14.826 [2024-05-13 03:12:05.476676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.826 [2024-05-13 03:12:05.476966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.826 [2024-05-13 03:12:05.476994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.826 qpair failed and we were unable to recover it. 00:31:14.826 [2024-05-13 03:12:05.477218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.826 [2024-05-13 03:12:05.477504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.826 [2024-05-13 03:12:05.477534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.826 qpair failed and we were unable to recover it. 00:31:14.827 [2024-05-13 03:12:05.477785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.478023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.478066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.827 qpair failed and we were unable to recover it. 00:31:14.827 [2024-05-13 03:12:05.478349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.478681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.478738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.827 qpair failed and we were unable to recover it. 00:31:14.827 [2024-05-13 03:12:05.478986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.479313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.479356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.827 qpair failed and we were unable to recover it. 00:31:14.827 [2024-05-13 03:12:05.479628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.479892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.479919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.827 qpair failed and we were unable to recover it. 
00:31:14.827 [2024-05-13 03:12:05.480151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.480429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.480475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.827 qpair failed and we were unable to recover it. 00:31:14.827 [2024-05-13 03:12:05.480693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.480975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.481002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.827 qpair failed and we were unable to recover it. 00:31:14.827 [2024-05-13 03:12:05.481334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.481718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.481760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.827 qpair failed and we were unable to recover it. 00:31:14.827 [2024-05-13 03:12:05.481968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.482185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.482227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.827 qpair failed and we were unable to recover it. 00:31:14.827 [2024-05-13 03:12:05.482501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.482756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.482781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.827 qpair failed and we were unable to recover it. 00:31:14.827 [2024-05-13 03:12:05.483132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.483410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.483454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.827 qpair failed and we were unable to recover it. 00:31:14.827 [2024-05-13 03:12:05.483810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.484201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.484259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.827 qpair failed and we were unable to recover it. 
00:31:14.827 [2024-05-13 03:12:05.484576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.484849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.484877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.827 qpair failed and we were unable to recover it. 00:31:14.827 [2024-05-13 03:12:05.485144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.485428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.485463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.827 qpair failed and we were unable to recover it. 00:31:14.827 [2024-05-13 03:12:05.485759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.486005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.486047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.827 qpair failed and we were unable to recover it. 00:31:14.827 [2024-05-13 03:12:05.486331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.486570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.486599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.827 qpair failed and we were unable to recover it. 00:31:14.827 [2024-05-13 03:12:05.486871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.487103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.487146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.827 qpair failed and we were unable to recover it. 00:31:14.827 [2024-05-13 03:12:05.487424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.487671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.487705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.827 qpair failed and we were unable to recover it. 00:31:14.827 [2024-05-13 03:12:05.487959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.488252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.488296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.827 qpair failed and we were unable to recover it. 
00:31:14.827 [2024-05-13 03:12:05.488656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.488971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.489013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.827 qpair failed and we were unable to recover it. 00:31:14.827 [2024-05-13 03:12:05.489369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.489816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.489841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.827 qpair failed and we were unable to recover it. 00:31:14.827 [2024-05-13 03:12:05.490061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.490312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.490355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.827 qpair failed and we were unable to recover it. 00:31:14.827 [2024-05-13 03:12:05.490630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.490915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.490942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.827 qpair failed and we were unable to recover it. 00:31:14.827 [2024-05-13 03:12:05.491338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.491821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.491847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.827 qpair failed and we were unable to recover it. 00:31:14.827 [2024-05-13 03:12:05.492130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.492385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.492432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.827 qpair failed and we were unable to recover it. 00:31:14.827 [2024-05-13 03:12:05.492714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.492947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.492973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.827 qpair failed and we were unable to recover it. 
00:31:14.827 [2024-05-13 03:12:05.493261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.493576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.493627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.827 qpair failed and we were unable to recover it. 00:31:14.827 [2024-05-13 03:12:05.493891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.494265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.494323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.827 qpair failed and we were unable to recover it. 00:31:14.827 [2024-05-13 03:12:05.494588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.494874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.494901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.827 qpair failed and we were unable to recover it. 00:31:14.827 [2024-05-13 03:12:05.495131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.495443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.495492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.827 qpair failed and we were unable to recover it. 00:31:14.827 [2024-05-13 03:12:05.495749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.827 [2024-05-13 03:12:05.495989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.496033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.828 qpair failed and we were unable to recover it. 00:31:14.828 [2024-05-13 03:12:05.496343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.496586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.496613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.828 qpair failed and we were unable to recover it. 00:31:14.828 [2024-05-13 03:12:05.496855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.497098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.497143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.828 qpair failed and we were unable to recover it. 
00:31:14.828 [2024-05-13 03:12:05.497437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.497831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.497858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.828 qpair failed and we were unable to recover it. 00:31:14.828 [2024-05-13 03:12:05.498145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.498434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.498461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.828 qpair failed and we were unable to recover it. 00:31:14.828 [2024-05-13 03:12:05.498782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.499044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.499088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.828 qpair failed and we were unable to recover it. 00:31:14.828 [2024-05-13 03:12:05.499375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.499640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.499668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.828 qpair failed and we were unable to recover it. 00:31:14.828 [2024-05-13 03:12:05.499940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.500322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.500370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.828 qpair failed and we were unable to recover it. 00:31:14.828 [2024-05-13 03:12:05.500627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.500902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.500930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.828 qpair failed and we were unable to recover it. 00:31:14.828 [2024-05-13 03:12:05.501151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.501571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.501619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.828 qpair failed and we were unable to recover it. 
00:31:14.828 [2024-05-13 03:12:05.501895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.502166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.502218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.828 qpair failed and we were unable to recover it. 00:31:14.828 [2024-05-13 03:12:05.502562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.502852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.502879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.828 qpair failed and we were unable to recover it. 00:31:14.828 [2024-05-13 03:12:05.503152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.503402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.503449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.828 qpair failed and we were unable to recover it. 00:31:14.828 [2024-05-13 03:12:05.503690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.504010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.504054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.828 qpair failed and we were unable to recover it. 00:31:14.828 [2024-05-13 03:12:05.504341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.504623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.504668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.828 qpair failed and we were unable to recover it. 00:31:14.828 [2024-05-13 03:12:05.504880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.505185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.505214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.828 qpair failed and we were unable to recover it. 00:31:14.828 [2024-05-13 03:12:05.505481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.505737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.505764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.828 qpair failed and we were unable to recover it. 
00:31:14.828 [2024-05-13 03:12:05.505977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.506297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.506341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.828 qpair failed and we were unable to recover it. 00:31:14.828 [2024-05-13 03:12:05.506550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.506804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.506846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.828 qpair failed and we were unable to recover it. 00:31:14.828 [2024-05-13 03:12:05.507108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.507389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.507434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.828 qpair failed and we were unable to recover it. 00:31:14.828 [2024-05-13 03:12:05.507674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.507973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.507999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.828 qpair failed and we were unable to recover it. 00:31:14.828 [2024-05-13 03:12:05.508266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.508573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.508617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.828 qpair failed and we were unable to recover it. 00:31:14.828 [2024-05-13 03:12:05.509032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.509536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.509586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.828 qpair failed and we were unable to recover it. 00:31:14.828 [2024-05-13 03:12:05.509831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.510127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.510171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.828 qpair failed and we were unable to recover it. 
00:31:14.828 [2024-05-13 03:12:05.510439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.510821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.510847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.828 qpair failed and we were unable to recover it. 00:31:14.828 [2024-05-13 03:12:05.511112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.511388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.511439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.828 qpair failed and we were unable to recover it. 00:31:14.828 [2024-05-13 03:12:05.511766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.512093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.828 [2024-05-13 03:12:05.512121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.828 qpair failed and we were unable to recover it. 00:31:14.829 [2024-05-13 03:12:05.512409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.512801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.512828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.829 qpair failed and we were unable to recover it. 00:31:14.829 [2024-05-13 03:12:05.513068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.513407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.513452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.829 qpair failed and we were unable to recover it. 00:31:14.829 [2024-05-13 03:12:05.513682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.513929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.513956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.829 qpair failed and we were unable to recover it. 00:31:14.829 [2024-05-13 03:12:05.514201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.514463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.514508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.829 qpair failed and we were unable to recover it. 
00:31:14.829 [2024-05-13 03:12:05.514760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.514996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.515023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.829 qpair failed and we were unable to recover it. 00:31:14.829 [2024-05-13 03:12:05.515250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.515558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.515603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.829 qpair failed and we were unable to recover it. 00:31:14.829 [2024-05-13 03:12:05.515885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.516287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.516345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.829 qpair failed and we were unable to recover it. 00:31:14.829 [2024-05-13 03:12:05.516621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.516854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.516882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.829 qpair failed and we were unable to recover it. 00:31:14.829 [2024-05-13 03:12:05.517208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.517567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.517619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.829 qpair failed and we were unable to recover it. 00:31:14.829 [2024-05-13 03:12:05.517836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.518052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.518096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.829 qpair failed and we were unable to recover it. 00:31:14.829 [2024-05-13 03:12:05.518385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.518626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.518652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.829 qpair failed and we were unable to recover it. 
00:31:14.829 [2024-05-13 03:12:05.518910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.519306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.519366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.829 qpair failed and we were unable to recover it. 00:31:14.829 [2024-05-13 03:12:05.519624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.519912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.519939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.829 qpair failed and we were unable to recover it. 00:31:14.829 [2024-05-13 03:12:05.520197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.520456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.520499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.829 qpair failed and we were unable to recover it. 00:31:14.829 [2024-05-13 03:12:05.520693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.520945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.520972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.829 qpair failed and we were unable to recover it. 00:31:14.829 [2024-05-13 03:12:05.521370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.521718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.521761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.829 qpair failed and we were unable to recover it. 00:31:14.829 [2024-05-13 03:12:05.522156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.522545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.522599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.829 qpair failed and we were unable to recover it. 00:31:14.829 [2024-05-13 03:12:05.522851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.523102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.523146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.829 qpair failed and we were unable to recover it. 
00:31:14.829 [2024-05-13 03:12:05.523535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.523784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.523820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.829 qpair failed and we were unable to recover it. 00:31:14.829 [2024-05-13 03:12:05.524104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.524337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.524381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.829 qpair failed and we were unable to recover it. 00:31:14.829 [2024-05-13 03:12:05.524633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.524897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.524924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.829 qpair failed and we were unable to recover it. 00:31:14.829 [2024-05-13 03:12:05.525194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.525458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.525502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.829 qpair failed and we were unable to recover it. 00:31:14.829 [2024-05-13 03:12:05.525785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.526023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.526049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.829 qpair failed and we were unable to recover it. 00:31:14.829 [2024-05-13 03:12:05.526302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.526576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.526622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.829 qpair failed and we were unable to recover it. 00:31:14.829 [2024-05-13 03:12:05.526948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.527197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.527242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.829 qpair failed and we were unable to recover it. 
00:31:14.829 [2024-05-13 03:12:05.527492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.527810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.527851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.829 qpair failed and we were unable to recover it. 00:31:14.829 [2024-05-13 03:12:05.528129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.528394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.528443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.829 qpair failed and we were unable to recover it. 00:31:14.829 [2024-05-13 03:12:05.528724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.528926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.528951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.829 qpair failed and we were unable to recover it. 00:31:14.829 [2024-05-13 03:12:05.529189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.829 [2024-05-13 03:12:05.529670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.830 [2024-05-13 03:12:05.529724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.830 qpair failed and we were unable to recover it. 00:31:14.830 [2024-05-13 03:12:05.529947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.830 [2024-05-13 03:12:05.530165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.830 [2024-05-13 03:12:05.530215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.830 qpair failed and we were unable to recover it. 00:31:14.830 [2024-05-13 03:12:05.530506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.830 [2024-05-13 03:12:05.530788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.830 [2024-05-13 03:12:05.530815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.830 qpair failed and we were unable to recover it. 00:31:14.830 [2024-05-13 03:12:05.531060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.830 [2024-05-13 03:12:05.531435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.830 [2024-05-13 03:12:05.531482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.830 qpair failed and we were unable to recover it. 
00:31:14.830 [2024-05-13 03:12:05.531737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.830 [2024-05-13 03:12:05.532008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.830 [2024-05-13 03:12:05.532033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.830 qpair failed and we were unable to recover it. 00:31:14.830 [2024-05-13 03:12:05.532365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.830 [2024-05-13 03:12:05.532618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.830 [2024-05-13 03:12:05.532661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.830 qpair failed and we were unable to recover it. 00:31:14.830 [2024-05-13 03:12:05.532994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.830 [2024-05-13 03:12:05.533267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.830 [2024-05-13 03:12:05.533314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.830 qpair failed and we were unable to recover it. 00:31:14.830 [2024-05-13 03:12:05.533537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.830 [2024-05-13 03:12:05.533872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.830 [2024-05-13 03:12:05.533899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.830 qpair failed and we were unable to recover it. 00:31:14.830 [2024-05-13 03:12:05.534184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.830 [2024-05-13 03:12:05.534505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.830 [2024-05-13 03:12:05.534554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.830 qpair failed and we were unable to recover it. 00:31:14.830 [2024-05-13 03:12:05.534808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.830 [2024-05-13 03:12:05.535062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.830 [2024-05-13 03:12:05.535089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.830 qpair failed and we were unable to recover it. 00:31:14.830 [2024-05-13 03:12:05.535384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.830 [2024-05-13 03:12:05.535726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.830 [2024-05-13 03:12:05.535753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.830 qpair failed and we were unable to recover it. 
00:31:14.830 [2024-05-13 03:12:05.536031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.830 [2024-05-13 03:12:05.536416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.830 [2024-05-13 03:12:05.536463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.830 qpair failed and we were unable to recover it. 00:31:14.830 [2024-05-13 03:12:05.536708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.830 [2024-05-13 03:12:05.536912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.830 [2024-05-13 03:12:05.536938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.830 qpair failed and we were unable to recover it. 00:31:14.830 [2024-05-13 03:12:05.537251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.830 [2024-05-13 03:12:05.537574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.830 [2024-05-13 03:12:05.537618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.830 qpair failed and we were unable to recover it. 00:31:14.830 [2024-05-13 03:12:05.537890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.830 [2024-05-13 03:12:05.538187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.830 [2024-05-13 03:12:05.538232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.830 qpair failed and we were unable to recover it. 00:31:14.830 [2024-05-13 03:12:05.538523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.830 [2024-05-13 03:12:05.538871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.830 [2024-05-13 03:12:05.538911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.830 qpair failed and we were unable to recover it. 00:31:14.830 [2024-05-13 03:12:05.539209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.830 [2024-05-13 03:12:05.539744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.830 [2024-05-13 03:12:05.539770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.830 qpair failed and we were unable to recover it. 00:31:14.830 [2024-05-13 03:12:05.539988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.830 [2024-05-13 03:12:05.540224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.830 [2024-05-13 03:12:05.540269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.830 qpair failed and we were unable to recover it. 
00:31:14.830 [2024-05-13 03:12:05.540528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.830 [2024-05-13 03:12:05.540821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.830 [2024-05-13 03:12:05.540847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.830 qpair failed and we were unable to recover it. 00:31:14.830 [2024-05-13 03:12:05.541205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.830 [2024-05-13 03:12:05.541531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.830 [2024-05-13 03:12:05.541577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.830 qpair failed and we were unable to recover it. 00:31:14.830 [2024-05-13 03:12:05.541858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.830 [2024-05-13 03:12:05.542311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.830 [2024-05-13 03:12:05.542359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.830 qpair failed and we were unable to recover it. 00:31:14.830 [2024-05-13 03:12:05.542630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.830 [2024-05-13 03:12:05.543050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.830 [2024-05-13 03:12:05.543110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.830 qpair failed and we were unable to recover it. 00:31:14.830 [2024-05-13 03:12:05.543456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.830 [2024-05-13 03:12:05.543718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.830 [2024-05-13 03:12:05.543746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.830 qpair failed and we were unable to recover it. 00:31:14.830 [2024-05-13 03:12:05.544001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.830 [2024-05-13 03:12:05.544483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.830 [2024-05-13 03:12:05.544533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.830 qpair failed and we were unable to recover it. 00:31:14.830 [2024-05-13 03:12:05.544776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.830 [2024-05-13 03:12:05.545190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.830 [2024-05-13 03:12:05.545235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.831 qpair failed and we were unable to recover it. 
00:31:14.831 [2024-05-13 03:12:05.545511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.545738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.545764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.831 qpair failed and we were unable to recover it. 00:31:14.831 [2024-05-13 03:12:05.545967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.546209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.546253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.831 qpair failed and we were unable to recover it. 00:31:14.831 [2024-05-13 03:12:05.546503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.546758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.546799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.831 qpair failed and we were unable to recover it. 00:31:14.831 [2024-05-13 03:12:05.547060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.547269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.547296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.831 qpair failed and we were unable to recover it. 00:31:14.831 [2024-05-13 03:12:05.547585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.547870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.547898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.831 qpair failed and we were unable to recover it. 00:31:14.831 [2024-05-13 03:12:05.548236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.548575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.548618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.831 qpair failed and we were unable to recover it. 00:31:14.831 [2024-05-13 03:12:05.548944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.549446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.549496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.831 qpair failed and we were unable to recover it. 
00:31:14.831 [2024-05-13 03:12:05.549760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.550036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.550081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.831 qpair failed and we were unable to recover it. 00:31:14.831 [2024-05-13 03:12:05.550323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.550576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.550618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.831 qpair failed and we were unable to recover it. 00:31:14.831 [2024-05-13 03:12:05.550926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.551264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.551312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.831 qpair failed and we were unable to recover it. 00:31:14.831 [2024-05-13 03:12:05.551547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.551791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.551836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.831 qpair failed and we were unable to recover it. 00:31:14.831 [2024-05-13 03:12:05.552125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.552442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.552485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.831 qpair failed and we were unable to recover it. 00:31:14.831 [2024-05-13 03:12:05.552735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.552961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.552987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.831 qpair failed and we were unable to recover it. 00:31:14.831 [2024-05-13 03:12:05.553242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.553587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.553635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.831 qpair failed and we were unable to recover it. 
00:31:14.831 [2024-05-13 03:12:05.553878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.554142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.554187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.831 qpair failed and we were unable to recover it. 00:31:14.831 [2024-05-13 03:12:05.554519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.554753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.554780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.831 qpair failed and we were unable to recover it. 00:31:14.831 [2024-05-13 03:12:05.555039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.555347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.555391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.831 qpair failed and we were unable to recover it. 00:31:14.831 [2024-05-13 03:12:05.555637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.555864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.555892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.831 qpair failed and we were unable to recover it. 00:31:14.831 [2024-05-13 03:12:05.556168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.556589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.556640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.831 qpair failed and we were unable to recover it. 00:31:14.831 [2024-05-13 03:12:05.556941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.557209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.557252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.831 qpair failed and we were unable to recover it. 00:31:14.831 [2024-05-13 03:12:05.557535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.557791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.557835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.831 qpair failed and we were unable to recover it. 
00:31:14.831 [2024-05-13 03:12:05.558165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.558442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.558486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.831 qpair failed and we were unable to recover it. 00:31:14.831 [2024-05-13 03:12:05.558739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.559100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.559156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.831 qpair failed and we were unable to recover it. 00:31:14.831 [2024-05-13 03:12:05.559412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.559727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.559754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.831 qpair failed and we were unable to recover it. 00:31:14.831 [2024-05-13 03:12:05.560033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.560454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.560502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.831 qpair failed and we were unable to recover it. 00:31:14.831 [2024-05-13 03:12:05.560787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.561072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.561098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.831 qpair failed and we were unable to recover it. 00:31:14.831 [2024-05-13 03:12:05.561375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.561788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.561813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.831 qpair failed and we were unable to recover it. 00:31:14.831 [2024-05-13 03:12:05.562101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.831 [2024-05-13 03:12:05.562483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.562528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.832 qpair failed and we were unable to recover it. 
00:31:14.832 [2024-05-13 03:12:05.562773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.563017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.563043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.832 qpair failed and we were unable to recover it. 00:31:14.832 [2024-05-13 03:12:05.563344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.563608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.563651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.832 qpair failed and we were unable to recover it. 00:31:14.832 [2024-05-13 03:12:05.563915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.564190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.564234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.832 qpair failed and we were unable to recover it. 00:31:14.832 [2024-05-13 03:12:05.564510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.564732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.564770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.832 qpair failed and we were unable to recover it. 00:31:14.832 [2024-05-13 03:12:05.565031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.565299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.565345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.832 qpair failed and we were unable to recover it. 00:31:14.832 [2024-05-13 03:12:05.565776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.565985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.566011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.832 qpair failed and we were unable to recover it. 00:31:14.832 [2024-05-13 03:12:05.566223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.566519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.566563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.832 qpair failed and we were unable to recover it. 
00:31:14.832 [2024-05-13 03:12:05.566873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.567168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.567219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.832 qpair failed and we were unable to recover it. 00:31:14.832 [2024-05-13 03:12:05.567463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.567721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.567747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.832 qpair failed and we were unable to recover it. 00:31:14.832 [2024-05-13 03:12:05.567993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.568229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.568255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.832 qpair failed and we were unable to recover it. 00:31:14.832 [2024-05-13 03:12:05.568545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.568888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.568914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.832 qpair failed and we were unable to recover it. 00:31:14.832 [2024-05-13 03:12:05.569167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.569657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.569720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.832 qpair failed and we were unable to recover it. 00:31:14.832 [2024-05-13 03:12:05.570013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.570329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.570372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.832 qpair failed and we were unable to recover it. 00:31:14.832 [2024-05-13 03:12:05.570653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.570874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.570919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.832 qpair failed and we were unable to recover it. 
00:31:14.832 [2024-05-13 03:12:05.571179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.571490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.571543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.832 qpair failed and we were unable to recover it. 00:31:14.832 [2024-05-13 03:12:05.571841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.572183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.572230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.832 qpair failed and we were unable to recover it. 00:31:14.832 [2024-05-13 03:12:05.572513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.572760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.572786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.832 qpair failed and we were unable to recover it. 00:31:14.832 [2024-05-13 03:12:05.573104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.573401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.573428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.832 qpair failed and we were unable to recover it. 00:31:14.832 [2024-05-13 03:12:05.573664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.573963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.573991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.832 qpair failed and we were unable to recover it. 00:31:14.832 [2024-05-13 03:12:05.574261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.574762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.574788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.832 qpair failed and we were unable to recover it. 00:31:14.832 [2024-05-13 03:12:05.574961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.575212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.575256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.832 qpair failed and we were unable to recover it. 
00:31:14.832 [2024-05-13 03:12:05.575506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.575726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.575753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.832 qpair failed and we were unable to recover it. 00:31:14.832 [2024-05-13 03:12:05.576043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.576489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.576539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.832 qpair failed and we were unable to recover it. 00:31:14.832 [2024-05-13 03:12:05.576756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.577156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.577219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.832 qpair failed and we were unable to recover it. 00:31:14.832 [2024-05-13 03:12:05.577457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.577733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.577761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.832 qpair failed and we were unable to recover it. 00:31:14.832 [2024-05-13 03:12:05.578057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.578359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.578406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.832 qpair failed and we were unable to recover it. 00:31:14.832 [2024-05-13 03:12:05.578732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.832 [2024-05-13 03:12:05.578992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.833 [2024-05-13 03:12:05.579036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.833 qpair failed and we were unable to recover it. 00:31:14.833 [2024-05-13 03:12:05.579290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.833 [2024-05-13 03:12:05.579550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.833 [2024-05-13 03:12:05.579595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.833 qpair failed and we were unable to recover it. 
00:31:14.833 [2024-05-13 03:12:05.579884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.833 [2024-05-13 03:12:05.580126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.833 [2024-05-13 03:12:05.580169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.833 qpair failed and we were unable to recover it. 00:31:14.833 [2024-05-13 03:12:05.580716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.833 [2024-05-13 03:12:05.580979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.833 [2024-05-13 03:12:05.581005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.833 qpair failed and we were unable to recover it. 00:31:14.833 [2024-05-13 03:12:05.581294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.833 [2024-05-13 03:12:05.581582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.833 [2024-05-13 03:12:05.581627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.833 qpair failed and we were unable to recover it. 00:31:14.833 [2024-05-13 03:12:05.581868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.833 [2024-05-13 03:12:05.582157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.833 [2024-05-13 03:12:05.582201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.833 qpair failed and we were unable to recover it. 00:31:14.833 [2024-05-13 03:12:05.582497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.833 [2024-05-13 03:12:05.582711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.833 [2024-05-13 03:12:05.582737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.833 qpair failed and we were unable to recover it. 00:31:14.833 [2024-05-13 03:12:05.583012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.833 [2024-05-13 03:12:05.583393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.833 [2024-05-13 03:12:05.583443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.833 qpair failed and we were unable to recover it. 00:31:14.833 [2024-05-13 03:12:05.583745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.833 [2024-05-13 03:12:05.584050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.833 [2024-05-13 03:12:05.584077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.833 qpair failed and we were unable to recover it. 
00:31:14.833 [2024-05-13 03:12:05.584402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.833 [2024-05-13 03:12:05.584855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.833 [2024-05-13 03:12:05.584881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.833 qpair failed and we were unable to recover it. 00:31:14.833 [2024-05-13 03:12:05.585104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.833 [2024-05-13 03:12:05.585393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.833 [2024-05-13 03:12:05.585437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.833 qpair failed and we were unable to recover it. 00:31:14.833 [2024-05-13 03:12:05.585659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.833 [2024-05-13 03:12:05.585907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.833 [2024-05-13 03:12:05.585935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.833 qpair failed and we were unable to recover it. 00:31:14.833 [2024-05-13 03:12:05.586181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.833 [2024-05-13 03:12:05.586558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.833 [2024-05-13 03:12:05.586607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.833 qpair failed and we were unable to recover it. 00:31:14.833 [2024-05-13 03:12:05.586857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.833 [2024-05-13 03:12:05.587105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.833 [2024-05-13 03:12:05.587148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.833 qpair failed and we were unable to recover it. 00:31:14.833 [2024-05-13 03:12:05.587437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.833 [2024-05-13 03:12:05.587761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.833 [2024-05-13 03:12:05.587804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.833 qpair failed and we were unable to recover it. 00:31:14.833 [2024-05-13 03:12:05.588170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.833 [2024-05-13 03:12:05.588428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.833 [2024-05-13 03:12:05.588475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.833 qpair failed and we were unable to recover it. 
00:31:14.833 [2024-05-13 03:12:05.588761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.833 [2024-05-13 03:12:05.588991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.833 [2024-05-13 03:12:05.589035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.833 qpair failed and we were unable to recover it. 00:31:14.833 [2024-05-13 03:12:05.589272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.833 [2024-05-13 03:12:05.589542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.833 [2024-05-13 03:12:05.589587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.833 qpair failed and we were unable to recover it. 00:31:14.833 [2024-05-13 03:12:05.589857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.833 [2024-05-13 03:12:05.590149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.833 [2024-05-13 03:12:05.590194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.833 qpair failed and we were unable to recover it. 00:31:14.833 [2024-05-13 03:12:05.590491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.833 [2024-05-13 03:12:05.590773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.833 [2024-05-13 03:12:05.590816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.833 qpair failed and we were unable to recover it. 00:31:14.833 [2024-05-13 03:12:05.591116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.833 [2024-05-13 03:12:05.591359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.833 [2024-05-13 03:12:05.591404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.833 qpair failed and we were unable to recover it. 00:31:14.833 [2024-05-13 03:12:05.591746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.833 [2024-05-13 03:12:05.592036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.833 [2024-05-13 03:12:05.592091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.833 qpair failed and we were unable to recover it. 00:31:14.833 [2024-05-13 03:12:05.592365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.833 [2024-05-13 03:12:05.592624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.833 [2024-05-13 03:12:05.592666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.833 qpair failed and we were unable to recover it. 
00:31:14.833 [2024-05-13 03:12:05.592939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.593420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.593477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.834 qpair failed and we were unable to recover it. 00:31:14.834 [2024-05-13 03:12:05.593727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.593970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.593996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.834 qpair failed and we were unable to recover it. 00:31:14.834 [2024-05-13 03:12:05.594273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.594552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.594597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.834 qpair failed and we were unable to recover it. 00:31:14.834 [2024-05-13 03:12:05.594843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.595118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.595147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.834 qpair failed and we were unable to recover it. 00:31:14.834 [2024-05-13 03:12:05.595511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.595843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.595870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.834 qpair failed and we were unable to recover it. 00:31:14.834 [2024-05-13 03:12:05.596187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.596644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.596703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.834 qpair failed and we were unable to recover it. 00:31:14.834 [2024-05-13 03:12:05.596956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.597175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.597220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.834 qpair failed and we were unable to recover it. 
00:31:14.834 [2024-05-13 03:12:05.597487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.597775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.597818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.834 qpair failed and we were unable to recover it. 00:31:14.834 [2024-05-13 03:12:05.598137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.598424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.598472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.834 qpair failed and we were unable to recover it. 00:31:14.834 [2024-05-13 03:12:05.598692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.598903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.598930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.834 qpair failed and we were unable to recover it. 00:31:14.834 [2024-05-13 03:12:05.599167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.599423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.599468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.834 qpair failed and we were unable to recover it. 00:31:14.834 [2024-05-13 03:12:05.599740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.599998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.600042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.834 qpair failed and we were unable to recover it. 00:31:14.834 [2024-05-13 03:12:05.600569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.600805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.600831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.834 qpair failed and we were unable to recover it. 00:31:14.834 [2024-05-13 03:12:05.601074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.601356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.601402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.834 qpair failed and we were unable to recover it. 
00:31:14.834 [2024-05-13 03:12:05.601666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.601912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.601939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.834 qpair failed and we were unable to recover it. 00:31:14.834 [2024-05-13 03:12:05.602215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.602468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.602512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.834 qpair failed and we were unable to recover it. 00:31:14.834 [2024-05-13 03:12:05.602753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.603085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.603131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.834 qpair failed and we were unable to recover it. 00:31:14.834 [2024-05-13 03:12:05.603388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.603643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.603684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.834 qpair failed and we were unable to recover it. 00:31:14.834 [2024-05-13 03:12:05.603985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.604328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.604375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.834 qpair failed and we were unable to recover it. 00:31:14.834 [2024-05-13 03:12:05.604649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.604917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.604944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.834 qpair failed and we were unable to recover it. 00:31:14.834 [2024-05-13 03:12:05.605167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.605430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.605474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.834 qpair failed and we were unable to recover it. 
00:31:14.834 [2024-05-13 03:12:05.605673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.605904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.605932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.834 qpair failed and we were unable to recover it. 00:31:14.834 [2024-05-13 03:12:05.606182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.606421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.606465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.834 qpair failed and we were unable to recover it. 00:31:14.834 [2024-05-13 03:12:05.606707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.606919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.606946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.834 qpair failed and we were unable to recover it. 00:31:14.834 [2024-05-13 03:12:05.607189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.607428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.607472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.834 qpair failed and we were unable to recover it. 00:31:14.834 [2024-05-13 03:12:05.607733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.607957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.607999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.834 qpair failed and we were unable to recover it. 00:31:14.834 [2024-05-13 03:12:05.608253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.608498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.834 [2024-05-13 03:12:05.608543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:14.834 qpair failed and we were unable to recover it. 00:31:15.097 [2024-05-13 03:12:05.608796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.097 [2024-05-13 03:12:05.609255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.097 [2024-05-13 03:12:05.609312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.097 qpair failed and we were unable to recover it. 
00:31:15.097 [2024-05-13 03:12:05.609607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.097 [2024-05-13 03:12:05.609881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.097 [2024-05-13 03:12:05.609910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.097 qpair failed and we were unable to recover it. 00:31:15.097 [2024-05-13 03:12:05.610151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.097 [2024-05-13 03:12:05.610423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.097 [2024-05-13 03:12:05.610467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.097 qpair failed and we were unable to recover it. 00:31:15.097 [2024-05-13 03:12:05.610703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.097 [2024-05-13 03:12:05.610879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.097 [2024-05-13 03:12:05.610905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.097 qpair failed and we were unable to recover it. 00:31:15.097 [2024-05-13 03:12:05.611219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.097 [2024-05-13 03:12:05.611503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.097 [2024-05-13 03:12:05.611548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.097 qpair failed and we were unable to recover it. 00:31:15.097 [2024-05-13 03:12:05.611764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.097 [2024-05-13 03:12:05.611991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.097 [2024-05-13 03:12:05.612036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.097 qpair failed and we were unable to recover it. 00:31:15.097 [2024-05-13 03:12:05.612260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.097 [2024-05-13 03:12:05.612506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.097 [2024-05-13 03:12:05.612534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.097 qpair failed and we were unable to recover it. 00:31:15.097 [2024-05-13 03:12:05.612799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.097 [2024-05-13 03:12:05.613090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.097 [2024-05-13 03:12:05.613135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.097 qpair failed and we were unable to recover it. 
00:31:15.097 [2024-05-13 03:12:05.613407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.097 [2024-05-13 03:12:05.613637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.097 [2024-05-13 03:12:05.613664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.097 qpair failed and we were unable to recover it. 00:31:15.097 [2024-05-13 03:12:05.613894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.097 [2024-05-13 03:12:05.614172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.097 [2024-05-13 03:12:05.614218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.097 qpair failed and we were unable to recover it. 00:31:15.097 [2024-05-13 03:12:05.614492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.097 [2024-05-13 03:12:05.614722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.097 [2024-05-13 03:12:05.614753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.097 qpair failed and we were unable to recover it. 00:31:15.097 [2024-05-13 03:12:05.615027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.097 [2024-05-13 03:12:05.615516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.097 [2024-05-13 03:12:05.615568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.097 qpair failed and we were unable to recover it. 00:31:15.097 [2024-05-13 03:12:05.615804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.097 [2024-05-13 03:12:05.616042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.097 [2024-05-13 03:12:05.616086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.097 qpair failed and we were unable to recover it. 00:31:15.097 [2024-05-13 03:12:05.616360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.097 [2024-05-13 03:12:05.616637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.097 [2024-05-13 03:12:05.616664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.097 qpair failed and we were unable to recover it. 00:31:15.097 [2024-05-13 03:12:05.616923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.097 [2024-05-13 03:12:05.617347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.097 [2024-05-13 03:12:05.617398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.097 qpair failed and we were unable to recover it. 
00:31:15.097 [2024-05-13 03:12:05.617620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.097 [2024-05-13 03:12:05.617832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.097 [2024-05-13 03:12:05.617859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.097 qpair failed and we were unable to recover it. 00:31:15.097 [2024-05-13 03:12:05.618153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.097 [2024-05-13 03:12:05.618454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.097 [2024-05-13 03:12:05.618503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.097 qpair failed and we were unable to recover it. 00:31:15.097 [2024-05-13 03:12:05.618716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.097 [2024-05-13 03:12:05.618993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.097 [2024-05-13 03:12:05.619037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.097 qpair failed and we were unable to recover it. 00:31:15.097 [2024-05-13 03:12:05.619288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.097 [2024-05-13 03:12:05.619534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.097 [2024-05-13 03:12:05.619579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.097 qpair failed and we were unable to recover it. 00:31:15.097 [2024-05-13 03:12:05.619856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.097 [2024-05-13 03:12:05.620238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.097 [2024-05-13 03:12:05.620301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.097 qpair failed and we were unable to recover it. 00:31:15.097 [2024-05-13 03:12:05.620550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.097 [2024-05-13 03:12:05.620819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.097 [2024-05-13 03:12:05.620868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.097 qpair failed and we were unable to recover it. 00:31:15.097 [2024-05-13 03:12:05.621147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.097 [2024-05-13 03:12:05.621387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.097 [2024-05-13 03:12:05.621418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.097 qpair failed and we were unable to recover it. 
00:31:15.097 [2024-05-13 03:12:05.621674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.097 [2024-05-13 03:12:05.621946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.097 [2024-05-13 03:12:05.621973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.097 qpair failed and we were unable to recover it. 00:31:15.097 [2024-05-13 03:12:05.622214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.097 [2024-05-13 03:12:05.622511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.097 [2024-05-13 03:12:05.622556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.097 qpair failed and we were unable to recover it. 00:31:15.097 [2024-05-13 03:12:05.622798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.097 [2024-05-13 03:12:05.623049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.097 [2024-05-13 03:12:05.623093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.097 qpair failed and we were unable to recover it. 00:31:15.098 [2024-05-13 03:12:05.623358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.623561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.623587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.098 qpair failed and we were unable to recover it. 00:31:15.098 [2024-05-13 03:12:05.623834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.624301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.624352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.098 qpair failed and we were unable to recover it. 00:31:15.098 [2024-05-13 03:12:05.624642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.624936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.624963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.098 qpair failed and we were unable to recover it. 00:31:15.098 [2024-05-13 03:12:05.625202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.625492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.625540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.098 qpair failed and we were unable to recover it. 
00:31:15.098 [2024-05-13 03:12:05.625775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.626004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.626049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.098 qpair failed and we were unable to recover it. 00:31:15.098 [2024-05-13 03:12:05.626272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.626538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.626588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.098 qpair failed and we were unable to recover it. 00:31:15.098 [2024-05-13 03:12:05.626835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.627091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.627135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.098 qpair failed and we were unable to recover it. 00:31:15.098 [2024-05-13 03:12:05.627419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.627650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.627693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.098 qpair failed and we were unable to recover it. 00:31:15.098 [2024-05-13 03:12:05.627987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.628462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.628510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.098 qpair failed and we were unable to recover it. 00:31:15.098 [2024-05-13 03:12:05.628798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.629217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.629269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.098 qpair failed and we were unable to recover it. 00:31:15.098 [2024-05-13 03:12:05.629519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.629822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.629863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.098 qpair failed and we were unable to recover it. 
00:31:15.098 [2024-05-13 03:12:05.630136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.630461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.630504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.098 qpair failed and we were unable to recover it. 00:31:15.098 [2024-05-13 03:12:05.630755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.630949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.630977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.098 qpair failed and we were unable to recover it. 00:31:15.098 [2024-05-13 03:12:05.631197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.631498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.631544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.098 qpair failed and we were unable to recover it. 00:31:15.098 [2024-05-13 03:12:05.631787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.632012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.632040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.098 qpair failed and we were unable to recover it. 00:31:15.098 [2024-05-13 03:12:05.632304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.632561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.632608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.098 qpair failed and we were unable to recover it. 00:31:15.098 [2024-05-13 03:12:05.632863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.633276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.633323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.098 qpair failed and we were unable to recover it. 00:31:15.098 [2024-05-13 03:12:05.633573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.633865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.633893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.098 qpair failed and we were unable to recover it. 
00:31:15.098 [2024-05-13 03:12:05.634188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.634420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.634462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.098 qpair failed and we were unable to recover it. 00:31:15.098 [2024-05-13 03:12:05.634680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.634909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.634936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.098 qpair failed and we were unable to recover it. 00:31:15.098 [2024-05-13 03:12:05.635194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.635463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.635509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.098 qpair failed and we were unable to recover it. 00:31:15.098 [2024-05-13 03:12:05.635845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.636276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.636328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.098 qpair failed and we were unable to recover it. 00:31:15.098 [2024-05-13 03:12:05.636605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.636850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.636878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.098 qpair failed and we were unable to recover it. 00:31:15.098 [2024-05-13 03:12:05.637120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.637536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.637589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.098 qpair failed and we were unable to recover it. 00:31:15.098 [2024-05-13 03:12:05.637841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.638091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.638121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.098 qpair failed and we were unable to recover it. 
00:31:15.098 [2024-05-13 03:12:05.638418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.638707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.638734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.098 qpair failed and we were unable to recover it. 00:31:15.098 [2024-05-13 03:12:05.638956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.639204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.639247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.098 qpair failed and we were unable to recover it. 00:31:15.098 [2024-05-13 03:12:05.639496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.639758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.639787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.098 qpair failed and we were unable to recover it. 00:31:15.098 [2024-05-13 03:12:05.640002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.640293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.640336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.098 qpair failed and we were unable to recover it. 00:31:15.098 [2024-05-13 03:12:05.640610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.640856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.640884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.098 qpair failed and we were unable to recover it. 00:31:15.098 [2024-05-13 03:12:05.641144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.641590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.641638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.098 qpair failed and we were unable to recover it. 00:31:15.098 [2024-05-13 03:12:05.641886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.642102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.642146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.098 qpair failed and we were unable to recover it. 
00:31:15.098 [2024-05-13 03:12:05.642399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.642659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.642686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.098 qpair failed and we were unable to recover it. 00:31:15.098 [2024-05-13 03:12:05.642931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.643171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.643217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.098 qpair failed and we were unable to recover it. 00:31:15.098 [2024-05-13 03:12:05.643456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.643664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.643714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.098 qpair failed and we were unable to recover it. 00:31:15.098 [2024-05-13 03:12:05.643944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.644176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.644206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.098 qpair failed and we were unable to recover it. 00:31:15.098 [2024-05-13 03:12:05.644548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.644778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.644805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.098 qpair failed and we were unable to recover it. 00:31:15.098 [2024-05-13 03:12:05.645045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.645271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.645314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.098 qpair failed and we were unable to recover it. 00:31:15.098 [2024-05-13 03:12:05.645602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.645866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.645909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.098 qpair failed and we were unable to recover it. 
00:31:15.098 [2024-05-13 03:12:05.646178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.646567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.646611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.098 qpair failed and we were unable to recover it. 00:31:15.098 [2024-05-13 03:12:05.646879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.647172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.647216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.098 qpair failed and we were unable to recover it. 00:31:15.098 [2024-05-13 03:12:05.647477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.647817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.647845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.098 qpair failed and we were unable to recover it. 00:31:15.098 [2024-05-13 03:12:05.648070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.648400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.648443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.098 qpair failed and we were unable to recover it. 00:31:15.098 [2024-05-13 03:12:05.648669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.648922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.648967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.098 qpair failed and we were unable to recover it. 00:31:15.098 [2024-05-13 03:12:05.649336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.649597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.649624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.098 qpair failed and we were unable to recover it. 00:31:15.098 [2024-05-13 03:12:05.649862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.650128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.650171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.098 qpair failed and we were unable to recover it. 
00:31:15.098 [2024-05-13 03:12:05.650402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.650682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.650717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.098 qpair failed and we were unable to recover it. 00:31:15.098 [2024-05-13 03:12:05.650937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.651193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.651236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.098 qpair failed and we were unable to recover it. 00:31:15.098 [2024-05-13 03:12:05.651738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.651986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.652012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.098 qpair failed and we were unable to recover it. 00:31:15.098 [2024-05-13 03:12:05.652267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.652602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.098 [2024-05-13 03:12:05.652646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.098 qpair failed and we were unable to recover it. 00:31:15.098 [2024-05-13 03:12:05.652899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.653142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.653186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.099 qpair failed and we were unable to recover it. 00:31:15.099 [2024-05-13 03:12:05.653420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.653688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.653736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.099 qpair failed and we were unable to recover it. 00:31:15.099 [2024-05-13 03:12:05.653971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.654217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.654262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.099 qpair failed and we were unable to recover it. 
00:31:15.099 [2024-05-13 03:12:05.654476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.654839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.654865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.099 qpair failed and we were unable to recover it. 00:31:15.099 [2024-05-13 03:12:05.655122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.655448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.655491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.099 qpair failed and we were unable to recover it. 00:31:15.099 [2024-05-13 03:12:05.655823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.656032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.656059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.099 qpair failed and we were unable to recover it. 00:31:15.099 [2024-05-13 03:12:05.656353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.656670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.656721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.099 qpair failed and we were unable to recover it. 00:31:15.099 [2024-05-13 03:12:05.656983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.657232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.657279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.099 qpair failed and we were unable to recover it. 00:31:15.099 [2024-05-13 03:12:05.657525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.657812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.657840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.099 qpair failed and we were unable to recover it. 00:31:15.099 [2024-05-13 03:12:05.658056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.658313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.658357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.099 qpair failed and we were unable to recover it. 
00:31:15.099 [2024-05-13 03:12:05.658597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.658897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.658924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.099 qpair failed and we were unable to recover it. 00:31:15.099 [2024-05-13 03:12:05.659271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.659555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.659600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.099 qpair failed and we were unable to recover it. 00:31:15.099 [2024-05-13 03:12:05.659927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.660333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.660388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.099 qpair failed and we were unable to recover it. 00:31:15.099 [2024-05-13 03:12:05.660664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.660948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.660974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.099 qpair failed and we were unable to recover it. 00:31:15.099 [2024-05-13 03:12:05.661230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.661489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.661534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.099 qpair failed and we were unable to recover it. 00:31:15.099 [2024-05-13 03:12:05.661742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.662019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.662045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.099 qpair failed and we were unable to recover it. 00:31:15.099 [2024-05-13 03:12:05.662346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.662734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.662796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.099 qpair failed and we were unable to recover it. 
00:31:15.099 [2024-05-13 03:12:05.663082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.663344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.663387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.099 qpair failed and we were unable to recover it. 00:31:15.099 [2024-05-13 03:12:05.663783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.663995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.664023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.099 qpair failed and we were unable to recover it. 00:31:15.099 [2024-05-13 03:12:05.664244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.664468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.664512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.099 qpair failed and we were unable to recover it. 00:31:15.099 [2024-05-13 03:12:05.664755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.664997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.665023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.099 qpair failed and we were unable to recover it. 00:31:15.099 [2024-05-13 03:12:05.665401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.665685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.665719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.099 qpair failed and we were unable to recover it. 00:31:15.099 [2024-05-13 03:12:05.665978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.666405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.666464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.099 qpair failed and we were unable to recover it. 00:31:15.099 [2024-05-13 03:12:05.666730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.667118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.667171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.099 qpair failed and we were unable to recover it. 
00:31:15.099 [2024-05-13 03:12:05.667456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.667670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.667707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.099 qpair failed and we were unable to recover it. 00:31:15.099 [2024-05-13 03:12:05.667901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.668148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.668192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.099 qpair failed and we were unable to recover it. 00:31:15.099 [2024-05-13 03:12:05.668469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.668722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.668760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.099 qpair failed and we were unable to recover it. 00:31:15.099 [2024-05-13 03:12:05.668966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.669223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.669267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.099 qpair failed and we were unable to recover it. 00:31:15.099 [2024-05-13 03:12:05.669547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.669790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.669818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.099 qpair failed and we were unable to recover it. 00:31:15.099 [2024-05-13 03:12:05.670068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.670331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.670373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.099 qpair failed and we were unable to recover it. 00:31:15.099 [2024-05-13 03:12:05.670754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.671002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.671028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.099 qpair failed and we were unable to recover it. 
00:31:15.099 [2024-05-13 03:12:05.671287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.671629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.099 [2024-05-13 03:12:05.671672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.099 qpair failed and we were unable to recover it.
00:31:15.099 [... the same three-message sequence (posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.") repeats continuously with increasing timestamps from 2024-05-13 03:12:05.671 through 03:12:05.766 ...]
00:31:15.102 [2024-05-13 03:12:05.767256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.102 [2024-05-13 03:12:05.767527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.102 [2024-05-13 03:12:05.767572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.102 qpair failed and we were unable to recover it. 00:31:15.102 [2024-05-13 03:12:05.767848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.102 [2024-05-13 03:12:05.768095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.102 [2024-05-13 03:12:05.768139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.102 qpair failed and we were unable to recover it. 00:31:15.102 [2024-05-13 03:12:05.768382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.102 [2024-05-13 03:12:05.768643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.102 [2024-05-13 03:12:05.768684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.102 qpair failed and we were unable to recover it. 00:31:15.102 [2024-05-13 03:12:05.768956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.102 [2024-05-13 03:12:05.769207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.102 [2024-05-13 03:12:05.769251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.102 qpair failed and we were unable to recover it. 00:31:15.102 [2024-05-13 03:12:05.769520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.102 [2024-05-13 03:12:05.769827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.102 [2024-05-13 03:12:05.769872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.102 qpair failed and we were unable to recover it. 00:31:15.102 [2024-05-13 03:12:05.770104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.102 [2024-05-13 03:12:05.770367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.102 [2024-05-13 03:12:05.770410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.102 qpair failed and we were unable to recover it. 00:31:15.102 [2024-05-13 03:12:05.770631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.102 [2024-05-13 03:12:05.770903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.102 [2024-05-13 03:12:05.770948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.102 qpair failed and we were unable to recover it. 
00:31:15.102 [2024-05-13 03:12:05.771252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.102 [2024-05-13 03:12:05.771509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.102 [2024-05-13 03:12:05.771552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.102 qpair failed and we were unable to recover it. 00:31:15.102 [2024-05-13 03:12:05.771906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.102 [2024-05-13 03:12:05.772162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.102 [2024-05-13 03:12:05.772204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.102 qpair failed and we were unable to recover it. 00:31:15.102 [2024-05-13 03:12:05.772674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.102 [2024-05-13 03:12:05.773091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.102 [2024-05-13 03:12:05.773148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.102 qpair failed and we were unable to recover it. 00:31:15.102 [2024-05-13 03:12:05.773421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.102 [2024-05-13 03:12:05.773683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.102 [2024-05-13 03:12:05.773732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.102 qpair failed and we were unable to recover it. 00:31:15.102 [2024-05-13 03:12:05.773985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.102 [2024-05-13 03:12:05.774278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.102 [2024-05-13 03:12:05.774307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.102 qpair failed and we were unable to recover it. 00:31:15.102 [2024-05-13 03:12:05.774604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.102 [2024-05-13 03:12:05.775066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.102 [2024-05-13 03:12:05.775123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.102 qpair failed and we were unable to recover it. 00:31:15.102 [2024-05-13 03:12:05.775618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.102 [2024-05-13 03:12:05.775867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.102 [2024-05-13 03:12:05.775895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.102 qpair failed and we were unable to recover it. 
00:31:15.102 [2024-05-13 03:12:05.776137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.776673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.776732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 00:31:15.103 [2024-05-13 03:12:05.777018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.777434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.777464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 00:31:15.103 [2024-05-13 03:12:05.777778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.778050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.778076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 00:31:15.103 [2024-05-13 03:12:05.778411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.778659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.778705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 00:31:15.103 [2024-05-13 03:12:05.778934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.779159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.779205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 00:31:15.103 [2024-05-13 03:12:05.779497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.779727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.779755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 00:31:15.103 [2024-05-13 03:12:05.780040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.780468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.780517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 
00:31:15.103 [2024-05-13 03:12:05.780753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.780965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.780992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 00:31:15.103 [2024-05-13 03:12:05.781266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.781580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.781607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 00:31:15.103 [2024-05-13 03:12:05.781922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.782239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.782283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 00:31:15.103 [2024-05-13 03:12:05.782555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.783028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.783088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 00:31:15.103 [2024-05-13 03:12:05.783480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.783716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.783744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 00:31:15.103 [2024-05-13 03:12:05.784071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.784398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.784441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 00:31:15.103 [2024-05-13 03:12:05.784687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.784967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.784994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 
00:31:15.103 [2024-05-13 03:12:05.785286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.785815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.785841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 00:31:15.103 [2024-05-13 03:12:05.786166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.786667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.786723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 00:31:15.103 [2024-05-13 03:12:05.787023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.787398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.787452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 00:31:15.103 [2024-05-13 03:12:05.787685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.788087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.788144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 00:31:15.103 [2024-05-13 03:12:05.788413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.788667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.788724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 00:31:15.103 [2024-05-13 03:12:05.788957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.789244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.789289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 00:31:15.103 [2024-05-13 03:12:05.789589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.789828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.789857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 
00:31:15.103 [2024-05-13 03:12:05.790080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.790377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.790420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 00:31:15.103 [2024-05-13 03:12:05.790746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.791022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.791048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 00:31:15.103 [2024-05-13 03:12:05.791508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.791798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.791826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 00:31:15.103 [2024-05-13 03:12:05.792110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.792583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.792632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 00:31:15.103 [2024-05-13 03:12:05.792840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.793086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.793130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 00:31:15.103 [2024-05-13 03:12:05.793417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.793642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.793667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 00:31:15.103 [2024-05-13 03:12:05.793945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.794557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.794608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 
00:31:15.103 [2024-05-13 03:12:05.794841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.795212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.795271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 00:31:15.103 [2024-05-13 03:12:05.795553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.795870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.795897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 00:31:15.103 [2024-05-13 03:12:05.796165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.796407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.796438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 00:31:15.103 [2024-05-13 03:12:05.796711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.796964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.797005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 00:31:15.103 [2024-05-13 03:12:05.797305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.797626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.797653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 00:31:15.103 [2024-05-13 03:12:05.797950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.798400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.798462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 00:31:15.103 [2024-05-13 03:12:05.798708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.798994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.799022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 
00:31:15.103 [2024-05-13 03:12:05.799519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.799857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.799899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 00:31:15.103 [2024-05-13 03:12:05.800245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.800569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.800613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 00:31:15.103 [2024-05-13 03:12:05.800807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.801036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.801080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 00:31:15.103 [2024-05-13 03:12:05.801361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.801669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.801701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 00:31:15.103 [2024-05-13 03:12:05.801965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.802212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.802256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 00:31:15.103 [2024-05-13 03:12:05.802540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.802769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.802797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 00:31:15.103 [2024-05-13 03:12:05.803023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.803300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.803344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 
00:31:15.103 [2024-05-13 03:12:05.803603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.803844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.803872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 00:31:15.103 [2024-05-13 03:12:05.804256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.804550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.804577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 00:31:15.103 [2024-05-13 03:12:05.804882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.805230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.805292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 00:31:15.103 [2024-05-13 03:12:05.805509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.805920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.805947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 00:31:15.103 [2024-05-13 03:12:05.806205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.806459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.806489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 00:31:15.103 [2024-05-13 03:12:05.806704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.806930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.806957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 00:31:15.103 [2024-05-13 03:12:05.807208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.807533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.807563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 
00:31:15.103 [2024-05-13 03:12:05.807818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.808097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.808141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 00:31:15.103 [2024-05-13 03:12:05.808381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.808738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.808764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 00:31:15.103 [2024-05-13 03:12:05.809007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.809248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.809278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 00:31:15.103 [2024-05-13 03:12:05.809582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.809824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.103 [2024-05-13 03:12:05.809852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.103 qpair failed and we were unable to recover it. 00:31:15.103 [2024-05-13 03:12:05.810061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 [2024-05-13 03:12:05.810517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 [2024-05-13 03:12:05.810569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.104 qpair failed and we were unable to recover it. 00:31:15.104 [2024-05-13 03:12:05.810851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 [2024-05-13 03:12:05.811210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 [2024-05-13 03:12:05.811263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.104 qpair failed and we were unable to recover it. 00:31:15.104 [2024-05-13 03:12:05.811535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 [2024-05-13 03:12:05.811796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 [2024-05-13 03:12:05.811823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.104 qpair failed and we were unable to recover it. 
00:31:15.104 [2024-05-13 03:12:05.812081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 [2024-05-13 03:12:05.812519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 [2024-05-13 03:12:05.812587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.104 qpair failed and we were unable to recover it. 00:31:15.104 [2024-05-13 03:12:05.812914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 [2024-05-13 03:12:05.813341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 [2024-05-13 03:12:05.813393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.104 qpair failed and we were unable to recover it. 00:31:15.104 [2024-05-13 03:12:05.813659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 [2024-05-13 03:12:05.813951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 [2024-05-13 03:12:05.813983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.104 qpair failed and we were unable to recover it. 00:31:15.104 [2024-05-13 03:12:05.814203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 [2024-05-13 03:12:05.814471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 [2024-05-13 03:12:05.814516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.104 qpair failed and we were unable to recover it. 00:31:15.104 [2024-05-13 03:12:05.814765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 [2024-05-13 03:12:05.814994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 [2024-05-13 03:12:05.815038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.104 qpair failed and we were unable to recover it. 00:31:15.104 [2024-05-13 03:12:05.815277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 [2024-05-13 03:12:05.815539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 [2024-05-13 03:12:05.815583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.104 qpair failed and we were unable to recover it. 00:31:15.104 [2024-05-13 03:12:05.815832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 [2024-05-13 03:12:05.816090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 [2024-05-13 03:12:05.816134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.104 qpair failed and we were unable to recover it. 
00:31:15.104 [2024-05-13 03:12:05.816414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 [2024-05-13 03:12:05.816673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 [2024-05-13 03:12:05.816721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.104 qpair failed and we were unable to recover it. 00:31:15.104 [2024-05-13 03:12:05.816947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 [2024-05-13 03:12:05.817203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 [2024-05-13 03:12:05.817245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.104 qpair failed and we were unable to recover it. 00:31:15.104 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 485882 Killed "${NVMF_APP[@]}" "$@" 00:31:15.104 [2024-05-13 03:12:05.817568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 [2024-05-13 03:12:05.817866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 [2024-05-13 03:12:05.817897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.104 qpair failed and we were unable to recover it. 00:31:15.104 03:12:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2 00:31:15.104 [2024-05-13 03:12:05.818210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 03:12:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:31:15.104 [2024-05-13 03:12:05.818602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 [2024-05-13 03:12:05.818646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.104 qpair failed and we were unable to recover it. 00:31:15.104 03:12:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:15.104 [2024-05-13 03:12:05.818880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 03:12:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:15.104 [2024-05-13 03:12:05.819158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 [2024-05-13 03:12:05.819211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.104 qpair failed and we were unable to recover it. 
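The "Killed" message from target_disconnect.sh line 44 is the deliberate fault this stretch of the log is exercising: the previously running nvmf target application (pid 485882) has just been killed, which is why every connect() above comes back with ECONNREFUSED, and the tc2 case now calls disconnect_init/nvmfappstart to bring a replacement target up with core mask 0xF0. A rough, hypothetical sketch of that kill-and-restart pattern (not the actual SPDK helpers; the pid variable and relative path are placeholders, and only the -i/-e/-m flags are taken from the nvmf_tgt invocation traced a few lines below):

    # Hedged sketch of the disconnect/restart step, not target_disconnect.sh itself.
    kill -9 "$old_tgt_pid" 2>/dev/null || true        # target goes away -> host sees ECONNREFUSED
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &     # start a fresh target with core mask 0xF0
    new_tgt_pid=$!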
00:31:15.104 03:12:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:15.104 [2024-05-13 03:12:05.819485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 [2024-05-13 03:12:05.819801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 [2024-05-13 03:12:05.819828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.104 qpair failed and we were unable to recover it. 00:31:15.104 [2024-05-13 03:12:05.820121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 [2024-05-13 03:12:05.820518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 [2024-05-13 03:12:05.820568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.104 qpair failed and we were unable to recover it. 00:31:15.104 [2024-05-13 03:12:05.820831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 [2024-05-13 03:12:05.821089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 [2024-05-13 03:12:05.821141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.104 qpair failed and we were unable to recover it. 00:31:15.104 [2024-05-13 03:12:05.821623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 [2024-05-13 03:12:05.821937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 [2024-05-13 03:12:05.821964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.104 qpair failed and we were unable to recover it. 00:31:15.104 [2024-05-13 03:12:05.822279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 [2024-05-13 03:12:05.822691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 [2024-05-13 03:12:05.822739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.104 qpair failed and we were unable to recover it. 00:31:15.104 [2024-05-13 03:12:05.823022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 [2024-05-13 03:12:05.823387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 [2024-05-13 03:12:05.823449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.104 qpair failed and we were unable to recover it. 00:31:15.104 [2024-05-13 03:12:05.823707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 [2024-05-13 03:12:05.823948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 [2024-05-13 03:12:05.823975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.104 qpair failed and we were unable to recover it. 
00:31:15.104 03:12:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=486455 00:31:15.104 [2024-05-13 03:12:05.824339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 03:12:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:31:15.104 03:12:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 486455 00:31:15.104 03:12:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 486455 ']' 00:31:15.104 [2024-05-13 03:12:05.824760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 [2024-05-13 03:12:05.824801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.104 qpair failed and we were unable to recover it. 00:31:15.104 03:12:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:15.104 03:12:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:15.104 [2024-05-13 03:12:05.825053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 03:12:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:15.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:15.104 03:12:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:15.104 [2024-05-13 03:12:05.825376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 [2024-05-13 03:12:05.825433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.104 qpair failed and we were unable to recover it. 00:31:15.104 03:12:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:15.104 [2024-05-13 03:12:05.825717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 [2024-05-13 03:12:05.826132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 [2024-05-13 03:12:05.826160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.104 qpair failed and we were unable to recover it. 00:31:15.104 [2024-05-13 03:12:05.826440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 [2024-05-13 03:12:05.826806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.104 [2024-05-13 03:12:05.826840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.104 qpair failed and we were unable to recover it. 
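waitforlisten then blocks until the new target (nvmfpid=486455) is ready; the real helper lives in autotest_common.sh and, per the message above, waits for the process to start up and listen on the UNIX domain socket /var/tmp/spdk.sock. A simplified, hypothetical stand-in with the same shape, which only waits for the socket file to appear while the pid stays alive (the timeout value is assumed, not taken from SPDK):

    # Hedged stand-in for the wait step, not SPDK's waitforlisten.
    wait_for_rpc_sock() {
        local pid="$1" sock="${2:-/var/tmp/spdk.sock}" i
        for i in $(seq 1 100); do                     # ~10s total at 0.1s per try (assumed)
            kill -0 "$pid" 2>/dev/null || return 1    # target died while starting up
            [ -S "$sock" ] && return 0                # RPC socket file is present
            sleep 0.1
        done
        return 1                                      # timed out
    }

    wait_for_rpc_sock 486455 /var/tmp/spdk.sock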
00:31:15.104 [2024-05-13 03:12:05.827061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:15.104 [2024-05-13 03:12:05.827540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:15.104 [2024-05-13 03:12:05.827590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420
00:31:15.104 qpair failed and we were unable to recover it.
00:31:15.104 [... the same four-line retry pattern (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats continuously from 03:12:05.827 through 03:12:05.872; the elapsed-time prefix advances from 00:31:15.104 to 00:31:15.106 over this span ...]
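The errno = 111 that every posix_sock_create failure above reports is Linux's ECONNREFUSED: the TCP connection attempt to 10.0.0.2 port 4420 (the default NVMe/TCP port) is being actively refused, which usually means nothing is accepting connections there yet, so the NVMe/TCP host driver keeps tearing the qpair down and retrying. The short C program below is not taken from SPDK; it is a minimal standalone sketch, assuming a Linux host and an unused local port standing in for the unreachable target, that shows connect() surfacing the same errno = 111.

    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        /* 127.0.0.1:4420 stands in for the 10.0.0.2:4420 target in the log;
         * any reachable address with no listener on the port behaves the same. */
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        /* With no listener, the peer answers the SYN with a RST and connect()
         * fails with ECONNREFUSED, which is 111 on Linux. */
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

        close(fd);
        return 0;
    }

Run against a port nobody listens on, it prints "connect() failed, errno = 111 (Connection refused)", matching the error lines above.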
00:31:15.106 [... the retry pattern continues from 03:12:05.872580 through 03:12:05.875090, then SPDK application start-up messages appear interleaved with the failures ...]
00:31:15.106 [2024-05-13 03:12:05.875077] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization...
00:31:15.106 [2024-05-13 03:12:05.875153] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:15.106 [2024-05-13 03:12:05.875302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:15.106 [2024-05-13 03:12:05.875327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420
00:31:15.106 qpair failed and we were unable to recover it.
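For context on the bracketed "[ DPDK EAL parameters: ... ]" line above: those are the command-line options the SPDK application forwards to DPDK's Environment Abstraction Layer at start-up (core mask 0xF0, a fixed base virtual address, the spdk0 hugepage file prefix, and so on). The fragment below is only a hypothetical illustration of how such an argument vector is consumed by rte_eal_init() in a standalone DPDK program; SPDK builds this vector internally from its environment options, so none of this is the project's actual start-up code.

    /* Hypothetical standalone example; it needs DPDK headers and libraries,
     * and rte_eal_init() will fail cleanly if hugepages or the requested
     * cores (mask 0xF0 = cores 4-7) are not available on the machine. */
    #include <stdio.h>
    #include <rte_eal.h>

    int main(void)
    {
        /* A representative subset of the EAL flags seen in the log above. */
        char *eal_argv[] = {
            "nvmf",                               /* program name */
            "-c", "0xF0",                         /* core mask */
            "--no-telemetry",
            "--log-level=lib.eal:6",
            "--base-virtaddr=0x200000000000",
            "--file-prefix=spdk0",
            "--proc-type=auto",
        };
        int eal_argc = (int)(sizeof(eal_argv) / sizeof(eal_argv[0]));

        int consumed = rte_eal_init(eal_argc, eal_argv);
        if (consumed < 0) {
            fprintf(stderr, "rte_eal_init() failed\n");
            return 1;
        }
        printf("EAL initialized, %d arguments consumed\n", consumed);

        rte_eal_cleanup();   /* release EAL resources before exit */
        return 0;
    }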
00:31:15.106 [... the same connect()/qpair retry-failure pattern continues from 03:12:05.875555 through 03:12:05.907347, still against tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420; the elapsed-time prefix advances from 00:31:15.106 through 00:31:15.107 to 00:31:15.378 over this span ...]
00:31:15.378 [2024-05-13 03:12:05.907543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.378 [2024-05-13 03:12:05.907784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.378 [2024-05-13 03:12:05.907812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.378 qpair failed and we were unable to recover it. 00:31:15.378 [2024-05-13 03:12:05.908028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.378 [2024-05-13 03:12:05.908268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.378 [2024-05-13 03:12:05.908312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.378 qpair failed and we were unable to recover it. 00:31:15.378 [2024-05-13 03:12:05.908501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.378 [2024-05-13 03:12:05.908732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.378 [2024-05-13 03:12:05.908760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.378 qpair failed and we were unable to recover it. 00:31:15.378 [2024-05-13 03:12:05.908970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.378 [2024-05-13 03:12:05.909225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.378 [2024-05-13 03:12:05.909252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.378 qpair failed and we were unable to recover it. 00:31:15.378 [2024-05-13 03:12:05.909518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.378 [2024-05-13 03:12:05.909829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.378 [2024-05-13 03:12:05.909857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.378 qpair failed and we were unable to recover it. 00:31:15.378 [2024-05-13 03:12:05.910099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.378 [2024-05-13 03:12:05.910293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.378 [2024-05-13 03:12:05.910321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.378 qpair failed and we were unable to recover it. 00:31:15.378 [2024-05-13 03:12:05.910528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.378 [2024-05-13 03:12:05.910733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.378 [2024-05-13 03:12:05.910760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.378 qpair failed and we were unable to recover it. 
00:31:15.378 [2024-05-13 03:12:05.910985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.378 [2024-05-13 03:12:05.911207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.378 [2024-05-13 03:12:05.911233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.378 qpair failed and we were unable to recover it. 00:31:15.378 [2024-05-13 03:12:05.911433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.378 [2024-05-13 03:12:05.911642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.378 [2024-05-13 03:12:05.911668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.378 qpair failed and we were unable to recover it. 00:31:15.378 [2024-05-13 03:12:05.911866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.378 [2024-05-13 03:12:05.912097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.378 [2024-05-13 03:12:05.912122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.378 qpair failed and we were unable to recover it. 00:31:15.378 [2024-05-13 03:12:05.912360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.378 [2024-05-13 03:12:05.912605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.378 [2024-05-13 03:12:05.912631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.378 qpair failed and we were unable to recover it. 00:31:15.378 [2024-05-13 03:12:05.912896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.378 [2024-05-13 03:12:05.913119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.378 [2024-05-13 03:12:05.913150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.378 qpair failed and we were unable to recover it. 00:31:15.378 [2024-05-13 03:12:05.913445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.378 [2024-05-13 03:12:05.913686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.378 [2024-05-13 03:12:05.913719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.378 qpair failed and we were unable to recover it. 00:31:15.378 [2024-05-13 03:12:05.913941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.378 [2024-05-13 03:12:05.914182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.378 [2024-05-13 03:12:05.914207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.378 qpair failed and we were unable to recover it. 
00:31:15.378 [2024-05-13 03:12:05.914423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.378 [2024-05-13 03:12:05.914647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.378 [2024-05-13 03:12:05.914673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.378 qpair failed and we were unable to recover it. 00:31:15.378 [2024-05-13 03:12:05.914880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.378 [2024-05-13 03:12:05.915273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.378 [2024-05-13 03:12:05.915298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.378 qpair failed and we were unable to recover it. 00:31:15.378 [2024-05-13 03:12:05.915533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.378 [2024-05-13 03:12:05.915781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.378 [2024-05-13 03:12:05.915822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.378 qpair failed and we were unable to recover it. 00:31:15.378 [2024-05-13 03:12:05.916097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.378 [2024-05-13 03:12:05.916354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.378 [2024-05-13 03:12:05.916396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.378 qpair failed and we were unable to recover it. 00:31:15.378 [2024-05-13 03:12:05.916669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.379 [2024-05-13 03:12:05.916898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.379 [2024-05-13 03:12:05.916925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.379 qpair failed and we were unable to recover it. 00:31:15.379 [2024-05-13 03:12:05.917138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.379 [2024-05-13 03:12:05.917444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.379 [2024-05-13 03:12:05.917484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.379 qpair failed and we were unable to recover it. 00:31:15.379 [2024-05-13 03:12:05.917810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.379 [2024-05-13 03:12:05.918030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.379 [2024-05-13 03:12:05.918056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.379 qpair failed and we were unable to recover it. 
00:31:15.379 [2024-05-13 03:12:05.918257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:15.379 EAL: No free 2048 kB hugepages reported on node 1
[the connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." groups continue from 03:12:05.918 through 03:12:05.925]
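The EAL message above reports that no free 2048 kB hugepages were available on NUMA node 1 when DPDK's environment abstraction layer initialized. As an illustrative aside (not part of the test output), the sketch below shows one way to read the kernel's free-hugepage counters; the sysfs paths are the standard Linux interfaces, and the choice of node 1 simply mirrors the node named in the message.

/* hugepage_check.c - illustrative sketch, not part of the test output.
 * Reads the standard Linux sysfs counters for free 2048 kB hugepages,
 * globally and for NUMA node 1 (the node named in the EAL message). */
#include <stdio.h>

static long read_counter(const char *path)
{
    FILE *f = fopen(path, "r");
    long value = -1;

    if (f == NULL)
        return -1;                    /* counter absent on this host */
    if (fscanf(f, "%ld", &value) != 1)
        value = -1;
    fclose(f);
    return value;
}

int main(void)
{
    long all_nodes = read_counter(
        "/sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages");
    long node1 = read_counter(
        "/sys/devices/system/node/node1/hugepages/hugepages-2048kB/free_hugepages");

    printf("free 2048 kB hugepages, all nodes: %ld\n", all_nodes);
    printf("free 2048 kB hugepages, node 1:    %ld\n", node1);
    return 0;
}

Built with any C compiler, it prints -1 for a counter that does not exist on the host (for example on a single-node machine).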
[the failure groups continue from 03:12:05.925 through 03:12:05.927]
00:31:15.379 [2024-05-13 03:12:05.927134] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
[the failure groups continue from 03:12:05.927 through 03:12:05.957]
[the failure groups continue from 03:12:05.957 through 03:12:05.959]
00:31:15.380 [2024-05-13 03:12:05.959021] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
[the connect() failed (errno = 111) / sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." groups continue from 03:12:05.959 through 03:12:05.972]
00:31:15.381 [2024-05-13 03:12:05.972617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.972816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.972843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.381 qpair failed and we were unable to recover it. 00:31:15.381 [2024-05-13 03:12:05.973085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.973295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.973321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.381 qpair failed and we were unable to recover it. 00:31:15.381 [2024-05-13 03:12:05.973575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.973772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.973799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.381 qpair failed and we were unable to recover it. 00:31:15.381 [2024-05-13 03:12:05.973989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.974194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.974220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.381 qpair failed and we were unable to recover it. 00:31:15.381 [2024-05-13 03:12:05.974453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.974727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.974755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.381 qpair failed and we were unable to recover it. 00:31:15.381 [2024-05-13 03:12:05.974947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.975184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.975214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.381 qpair failed and we were unable to recover it. 00:31:15.381 [2024-05-13 03:12:05.975431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.975678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.975727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.381 qpair failed and we were unable to recover it. 
00:31:15.381 [2024-05-13 03:12:05.975973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.976179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.976206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.381 qpair failed and we were unable to recover it. 00:31:15.381 [2024-05-13 03:12:05.976463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.976674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.976706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.381 qpair failed and we were unable to recover it. 00:31:15.381 [2024-05-13 03:12:05.976919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.977225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.977251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.381 qpair failed and we were unable to recover it. 00:31:15.381 [2024-05-13 03:12:05.977557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.977832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.977861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.381 qpair failed and we were unable to recover it. 00:31:15.381 [2024-05-13 03:12:05.978139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.978382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.978411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.381 qpair failed and we were unable to recover it. 00:31:15.381 [2024-05-13 03:12:05.978703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.978911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.978938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.381 qpair failed and we were unable to recover it. 00:31:15.381 [2024-05-13 03:12:05.979269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.979528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.979570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.381 qpair failed and we were unable to recover it. 
00:31:15.381 [2024-05-13 03:12:05.979828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.980163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.980188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.381 qpair failed and we were unable to recover it. 00:31:15.381 [2024-05-13 03:12:05.980494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.980741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.980773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.381 qpair failed and we were unable to recover it. 00:31:15.381 [2024-05-13 03:12:05.980995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.981240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.981282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.381 qpair failed and we were unable to recover it. 00:31:15.381 [2024-05-13 03:12:05.981540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.981842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.981869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.381 qpair failed and we were unable to recover it. 00:31:15.381 [2024-05-13 03:12:05.982086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.982356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.982383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.381 qpair failed and we were unable to recover it. 00:31:15.381 [2024-05-13 03:12:05.982615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.982840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.982868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.381 qpair failed and we were unable to recover it. 00:31:15.381 [2024-05-13 03:12:05.983088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.983295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.983322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.381 qpair failed and we were unable to recover it. 
00:31:15.381 [2024-05-13 03:12:05.983564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.983781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.983809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.381 qpair failed and we were unable to recover it. 00:31:15.381 [2024-05-13 03:12:05.984053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.984287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.984314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.381 qpair failed and we were unable to recover it. 00:31:15.381 [2024-05-13 03:12:05.984571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.984822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.984850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.381 qpair failed and we were unable to recover it. 00:31:15.381 [2024-05-13 03:12:05.985056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.985294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.985335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.381 qpair failed and we were unable to recover it. 00:31:15.381 [2024-05-13 03:12:05.985587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.985810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.985838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.381 qpair failed and we were unable to recover it. 00:31:15.381 [2024-05-13 03:12:05.986107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.986302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.986328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.381 qpair failed and we were unable to recover it. 00:31:15.381 [2024-05-13 03:12:05.986535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.986764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.986808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.381 qpair failed and we were unable to recover it. 
00:31:15.381 [2024-05-13 03:12:05.987051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.987334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.987360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.381 qpair failed and we were unable to recover it. 00:31:15.381 [2024-05-13 03:12:05.987651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.987890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.987917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.381 qpair failed and we were unable to recover it. 00:31:15.381 [2024-05-13 03:12:05.988147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.988356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.988383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.381 qpair failed and we were unable to recover it. 00:31:15.381 [2024-05-13 03:12:05.988610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.988856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.988885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.381 qpair failed and we were unable to recover it. 00:31:15.381 [2024-05-13 03:12:05.989094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.989477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.989518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.381 qpair failed and we were unable to recover it. 00:31:15.381 [2024-05-13 03:12:05.989795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.990023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.990051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.381 qpair failed and we were unable to recover it. 00:31:15.381 [2024-05-13 03:12:05.990308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.990585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.990612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.381 qpair failed and we were unable to recover it. 
00:31:15.381 [2024-05-13 03:12:05.990864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.991083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.991110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.381 qpair failed and we were unable to recover it. 00:31:15.381 [2024-05-13 03:12:05.991409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.991663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.991711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.381 qpair failed and we were unable to recover it. 00:31:15.381 [2024-05-13 03:12:05.992003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.992214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.992240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.381 qpair failed and we were unable to recover it. 00:31:15.381 [2024-05-13 03:12:05.992549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.992809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.992837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.381 qpair failed and we were unable to recover it. 00:31:15.381 [2024-05-13 03:12:05.993058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.993281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.993309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.381 qpair failed and we were unable to recover it. 00:31:15.381 [2024-05-13 03:12:05.993626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.993837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.993865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.381 qpair failed and we were unable to recover it. 00:31:15.381 [2024-05-13 03:12:05.994072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.994271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.994299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.381 qpair failed and we were unable to recover it. 
00:31:15.381 [2024-05-13 03:12:05.994539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.994782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.994810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.381 qpair failed and we were unable to recover it. 00:31:15.381 [2024-05-13 03:12:05.995086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.381 [2024-05-13 03:12:05.995333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:05.995374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.382 qpair failed and we were unable to recover it. 00:31:15.382 [2024-05-13 03:12:05.995651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:05.995958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:05.995985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.382 qpair failed and we were unable to recover it. 00:31:15.382 [2024-05-13 03:12:05.996195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:05.996441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:05.996483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.382 qpair failed and we were unable to recover it. 00:31:15.382 [2024-05-13 03:12:05.996804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:05.997113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:05.997140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.382 qpair failed and we were unable to recover it. 00:31:15.382 [2024-05-13 03:12:05.997366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:05.997592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:05.997618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.382 qpair failed and we were unable to recover it. 00:31:15.382 [2024-05-13 03:12:05.997858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:05.998100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:05.998127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.382 qpair failed and we were unable to recover it. 
00:31:15.382 [2024-05-13 03:12:05.998367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:05.998612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:05.998639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.382 qpair failed and we were unable to recover it. 00:31:15.382 [2024-05-13 03:12:05.998886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:05.999114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:05.999141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.382 qpair failed and we were unable to recover it. 00:31:15.382 [2024-05-13 03:12:05.999368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:05.999634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:05.999661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.382 qpair failed and we were unable to recover it. 00:31:15.382 [2024-05-13 03:12:05.999917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.000145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.000171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.382 qpair failed and we were unable to recover it. 00:31:15.382 [2024-05-13 03:12:06.000405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.000663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.000711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.382 qpair failed and we were unable to recover it. 00:31:15.382 [2024-05-13 03:12:06.001011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.001232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.001259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.382 qpair failed and we were unable to recover it. 00:31:15.382 [2024-05-13 03:12:06.001481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.001682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.001721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.382 qpair failed and we were unable to recover it. 
00:31:15.382 [2024-05-13 03:12:06.001948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.002188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.002229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.382 qpair failed and we were unable to recover it. 00:31:15.382 [2024-05-13 03:12:06.002484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.002729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.002757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.382 qpair failed and we were unable to recover it. 00:31:15.382 [2024-05-13 03:12:06.002980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.003242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.003284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.382 qpair failed and we were unable to recover it. 00:31:15.382 [2024-05-13 03:12:06.003510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.003712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.003739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.382 qpair failed and we were unable to recover it. 00:31:15.382 [2024-05-13 03:12:06.003970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.004168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.004194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.382 qpair failed and we were unable to recover it. 00:31:15.382 [2024-05-13 03:12:06.004384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.004713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.004740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.382 qpair failed and we were unable to recover it. 00:31:15.382 [2024-05-13 03:12:06.004944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.005271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.005312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.382 qpair failed and we were unable to recover it. 
00:31:15.382 [2024-05-13 03:12:06.005565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.005799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.005828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.382 qpair failed and we were unable to recover it. 00:31:15.382 [2024-05-13 03:12:06.006050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.006240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.006268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.382 qpair failed and we were unable to recover it. 00:31:15.382 [2024-05-13 03:12:06.006487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.006707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.006736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.382 qpair failed and we were unable to recover it. 00:31:15.382 [2024-05-13 03:12:06.006959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.007208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.007248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.382 qpair failed and we were unable to recover it. 00:31:15.382 [2024-05-13 03:12:06.007502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.007719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.007746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.382 qpair failed and we were unable to recover it. 00:31:15.382 [2024-05-13 03:12:06.007936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.008218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.008244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.382 qpair failed and we were unable to recover it. 00:31:15.382 [2024-05-13 03:12:06.008518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.008773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.008799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.382 qpair failed and we were unable to recover it. 
00:31:15.382 [2024-05-13 03:12:06.009026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.009261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.009287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.382 qpair failed and we were unable to recover it. 00:31:15.382 [2024-05-13 03:12:06.009524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.009748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.009774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.382 qpair failed and we were unable to recover it. 00:31:15.382 [2024-05-13 03:12:06.010015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.010268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.010308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.382 qpair failed and we were unable to recover it. 00:31:15.382 [2024-05-13 03:12:06.010492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.010717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.010744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.382 qpair failed and we were unable to recover it. 00:31:15.382 [2024-05-13 03:12:06.011019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.011223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.011249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.382 qpair failed and we were unable to recover it. 00:31:15.382 [2024-05-13 03:12:06.011496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.011718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.011745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.382 qpair failed and we were unable to recover it. 00:31:15.382 [2024-05-13 03:12:06.012165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.012440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.012470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.382 qpair failed and we were unable to recover it. 
00:31:15.382 [2024-05-13 03:12:06.012673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.012935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.012963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.382 qpair failed and we were unable to recover it. 00:31:15.382 [2024-05-13 03:12:06.013226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.013528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.013554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.382 qpair failed and we were unable to recover it. 00:31:15.382 [2024-05-13 03:12:06.013817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.014075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.014115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.382 qpair failed and we were unable to recover it. 00:31:15.382 [2024-05-13 03:12:06.014322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.014620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.014646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.382 qpair failed and we were unable to recover it. 00:31:15.382 [2024-05-13 03:12:06.014868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.015080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.015107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.382 qpair failed and we were unable to recover it. 00:31:15.382 [2024-05-13 03:12:06.015333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.015573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.015599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.382 qpair failed and we were unable to recover it. 00:31:15.382 [2024-05-13 03:12:06.015834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.016043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.016070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.382 qpair failed and we were unable to recover it. 
00:31:15.382 [2024-05-13 03:12:06.016332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.016572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.016598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.382 qpair failed and we were unable to recover it. 00:31:15.382 [2024-05-13 03:12:06.016798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.017033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.017059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.382 qpair failed and we were unable to recover it. 00:31:15.382 [2024-05-13 03:12:06.017295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.017497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.017523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.382 qpair failed and we were unable to recover it. 00:31:15.382 [2024-05-13 03:12:06.017754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.017990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.018016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.382 qpair failed and we were unable to recover it. 00:31:15.382 [2024-05-13 03:12:06.018271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.018567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.018592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.382 qpair failed and we were unable to recover it. 00:31:15.382 [2024-05-13 03:12:06.018810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.019039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.019067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.382 qpair failed and we were unable to recover it. 00:31:15.382 [2024-05-13 03:12:06.019284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.019527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.019553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.382 qpair failed and we were unable to recover it. 
00:31:15.382 [2024-05-13 03:12:06.019735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.019918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.382 [2024-05-13 03:12:06.019945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.382 qpair failed and we were unable to recover it. 00:31:15.383 [2024-05-13 03:12:06.020193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.383 [2024-05-13 03:12:06.020424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.383 [2024-05-13 03:12:06.020450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.383 qpair failed and we were unable to recover it. 00:31:15.383 [2024-05-13 03:12:06.020783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.383 [2024-05-13 03:12:06.021031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.383 [2024-05-13 03:12:06.021057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.383 qpair failed and we were unable to recover it. 00:31:15.383 [2024-05-13 03:12:06.021265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.383 [2024-05-13 03:12:06.021555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.383 [2024-05-13 03:12:06.021581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.383 qpair failed and we were unable to recover it. 00:31:15.383 [2024-05-13 03:12:06.021871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.383 [2024-05-13 03:12:06.022090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.383 [2024-05-13 03:12:06.022117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.383 qpair failed and we were unable to recover it. 00:31:15.383 [2024-05-13 03:12:06.022388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.383 [2024-05-13 03:12:06.022631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.383 [2024-05-13 03:12:06.022658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.383 qpair failed and we were unable to recover it. 00:31:15.383 [2024-05-13 03:12:06.022904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.383 [2024-05-13 03:12:06.023108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.383 [2024-05-13 03:12:06.023135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.383 qpair failed and we were unable to recover it. 
00:31:15.383 [2024-05-13 03:12:06.023325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:15.383 [2024-05-13 03:12:06.023545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:15.383 [2024-05-13 03:12:06.023571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420
00:31:15.383 qpair failed and we were unable to recover it.
[The four log lines above (two connect() failures with errno = 111, the nvme_tcp_qpair_connect_sock error for tqpair=0x7f0044000b90 at 10.0.0.2:4420, and the "qpair failed and we were unable to recover it." message) repeat continuously, with timestamps advancing from 03:12:06.023 to 03:12:06.056.]
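For reference while reading the failures above: on Linux, errno 111 is ECONNREFUSED, which typically means no listener was accepting connections on 10.0.0.2:4420 (the NVMe/TCP port) at the moment of each attempt, so the initiator could not establish the TCP qpair. The standalone sketch below only reproduces this errno reporting for a plain TCP connect; it is not the SPDK posix_sock_create() implementation, and the address and port are simply the values seen in the log.

```c
/* Standalone illustration only - not the SPDK posix_sock_create() code.
 * Attempts a TCP connect to the address/port seen in the log and prints
 * the errno, so "errno = 111" can be matched to ECONNREFUSED. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no listener on the port this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    } else {
        printf("connected\n");
    }

    close(fd);
    return 0;
}
```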
00:31:15.384 [2024-05-13 03:12:06.058388] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:15.384 [2024-05-13 03:12:06.058423] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:15.384 [2024-05-13 03:12:06.058443] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:15.384 [2024-05-13 03:12:06.058456] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:15.384 [2024-05-13 03:12:06.058467] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:31:15.384 [2024-05-13 03:12:06.058622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:31:15.384 [2024-05-13 03:12:06.058786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:31:15.384 [2024-05-13 03:12:06.058833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:31:15.384 [2024-05-13 03:12:06.058836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
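The notices above name two ways to collect trace data from the running nvmf target: run 'spdk_trace -s nvmf -i 0' for a live snapshot, or copy /dev/shm/nvmf_trace.0 for offline analysis. A minimal sketch of that copy step is shown below; the source path is taken from the notice, while the destination file name is an arbitrary choice for this example.

```c
/* Sketch only: copies the trace shared-memory file mentioned in the notice
 * above so it can be inspected offline. The destination name
 * ("nvmf_trace.0.copy") is an assumption made for this example. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    const char *src_path = "/dev/shm/nvmf_trace.0"; /* path from the log notice */
    const char *dst_path = "nvmf_trace.0.copy";     /* assumed destination */

    int src = open(src_path, O_RDONLY);
    if (src < 0) {
        perror("open source");
        return 1;
    }

    int dst = open(dst_path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (dst < 0) {
        perror("open destination");
        close(src);
        return 1;
    }

    char buf[65536];
    ssize_t n;
    while ((n = read(src, buf, sizeof(buf))) > 0) {
        if (write(dst, buf, n) != n) {
            perror("write");
            break;
        }
    }

    close(src);
    close(dst);
    return 0;
}
```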
[The same connect() / nvme_tcp_qpair_connect_sock / qpair-failed sequence continues to repeat around and after these notices, with timestamps advancing from roughly 03:12:06.056 to 03:12:06.098.]
00:31:15.386 [2024-05-13 03:12:06.098609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.098801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.098828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.386 qpair failed and we were unable to recover it. 00:31:15.386 [2024-05-13 03:12:06.099025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.099211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.099238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.386 qpair failed and we were unable to recover it. 00:31:15.386 [2024-05-13 03:12:06.099449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.099664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.099691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.386 qpair failed and we were unable to recover it. 00:31:15.386 [2024-05-13 03:12:06.099921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.100116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.100142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.386 qpair failed and we were unable to recover it. 00:31:15.386 [2024-05-13 03:12:06.100356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.100542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.100568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.386 qpair failed and we were unable to recover it. 00:31:15.386 [2024-05-13 03:12:06.100769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.101023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.101051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.386 qpair failed and we were unable to recover it. 00:31:15.386 [2024-05-13 03:12:06.101247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.101449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.101476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.386 qpair failed and we were unable to recover it. 
00:31:15.386 [2024-05-13 03:12:06.101666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.101864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.101893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.386 qpair failed and we were unable to recover it. 00:31:15.386 [2024-05-13 03:12:06.102137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.102324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.102351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.386 qpair failed and we were unable to recover it. 00:31:15.386 [2024-05-13 03:12:06.102615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.102837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.102864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.386 qpair failed and we were unable to recover it. 00:31:15.386 [2024-05-13 03:12:06.103097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.103282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.103309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.386 qpair failed and we were unable to recover it. 00:31:15.386 [2024-05-13 03:12:06.103500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.103770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.103796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.386 qpair failed and we were unable to recover it. 00:31:15.386 [2024-05-13 03:12:06.103991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.104200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.104227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.386 qpair failed and we were unable to recover it. 00:31:15.386 [2024-05-13 03:12:06.104414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.104637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.104663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.386 qpair failed and we were unable to recover it. 
00:31:15.386 [2024-05-13 03:12:06.104932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.105149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.105175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.386 qpair failed and we were unable to recover it. 00:31:15.386 [2024-05-13 03:12:06.105392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.105612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.105638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.386 qpair failed and we were unable to recover it. 00:31:15.386 [2024-05-13 03:12:06.105913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.106096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.106122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.386 qpair failed and we were unable to recover it. 00:31:15.386 [2024-05-13 03:12:06.106320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.106534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.106561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.386 qpair failed and we were unable to recover it. 00:31:15.386 [2024-05-13 03:12:06.106748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.106965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.106991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.386 qpair failed and we were unable to recover it. 00:31:15.386 [2024-05-13 03:12:06.107187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.107403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.107429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.386 qpair failed and we were unable to recover it. 00:31:15.386 [2024-05-13 03:12:06.107680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.107916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.107942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.386 qpair failed and we were unable to recover it. 
00:31:15.386 [2024-05-13 03:12:06.108128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.108327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.108356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.386 qpair failed and we were unable to recover it. 00:31:15.386 [2024-05-13 03:12:06.108574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.108787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.108815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.386 qpair failed and we were unable to recover it. 00:31:15.386 [2024-05-13 03:12:06.108998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.109189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.109217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.386 qpair failed and we were unable to recover it. 00:31:15.386 [2024-05-13 03:12:06.109435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.109627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.109655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.386 qpair failed and we were unable to recover it. 00:31:15.386 [2024-05-13 03:12:06.109880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.110097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.110124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.386 qpair failed and we were unable to recover it. 00:31:15.386 [2024-05-13 03:12:06.110317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.110508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.110534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.386 qpair failed and we were unable to recover it. 00:31:15.386 [2024-05-13 03:12:06.110753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.110978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.111006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.386 qpair failed and we were unable to recover it. 
00:31:15.386 [2024-05-13 03:12:06.111198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.111445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.111472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.386 qpair failed and we were unable to recover it. 00:31:15.386 [2024-05-13 03:12:06.111701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.111895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.111924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.386 qpair failed and we were unable to recover it. 00:31:15.386 [2024-05-13 03:12:06.112148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.112371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.112398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.386 qpair failed and we were unable to recover it. 00:31:15.386 [2024-05-13 03:12:06.112609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.112821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.112849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.386 qpair failed and we were unable to recover it. 00:31:15.386 [2024-05-13 03:12:06.113030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.113221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.113248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.386 qpair failed and we were unable to recover it. 00:31:15.386 [2024-05-13 03:12:06.113469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.113650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.386 [2024-05-13 03:12:06.113677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 00:31:15.387 [2024-05-13 03:12:06.113877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.114098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.114126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 
00:31:15.387 [2024-05-13 03:12:06.114317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.114533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.114560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 00:31:15.387 [2024-05-13 03:12:06.114774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.114959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.114987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 00:31:15.387 [2024-05-13 03:12:06.115197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.115411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.115438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 00:31:15.387 [2024-05-13 03:12:06.115653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.115894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.115922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 00:31:15.387 [2024-05-13 03:12:06.116134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.116377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.116403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 00:31:15.387 [2024-05-13 03:12:06.116651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.116842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.116871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 00:31:15.387 [2024-05-13 03:12:06.117093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.117289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.117317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 
00:31:15.387 [2024-05-13 03:12:06.117499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.117701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.117729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 00:31:15.387 [2024-05-13 03:12:06.117912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.118095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.118123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 00:31:15.387 [2024-05-13 03:12:06.118318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.118534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.118561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 00:31:15.387 [2024-05-13 03:12:06.118769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.118989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.119017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 00:31:15.387 [2024-05-13 03:12:06.119204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.119402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.119429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 00:31:15.387 [2024-05-13 03:12:06.119646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.120085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.120127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 00:31:15.387 [2024-05-13 03:12:06.120417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.120643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.120670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 
00:31:15.387 [2024-05-13 03:12:06.120901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.121086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.121113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 00:31:15.387 [2024-05-13 03:12:06.121331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.121547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.121574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 00:31:15.387 [2024-05-13 03:12:06.121947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.122214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.122240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 00:31:15.387 [2024-05-13 03:12:06.122465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.122689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.122723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 00:31:15.387 [2024-05-13 03:12:06.122925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.123141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.123168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 00:31:15.387 [2024-05-13 03:12:06.123391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.123594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.123623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 00:31:15.387 [2024-05-13 03:12:06.123839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.124060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.124088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 
00:31:15.387 [2024-05-13 03:12:06.124313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.124495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.124521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 00:31:15.387 [2024-05-13 03:12:06.124746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.124932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.124959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 00:31:15.387 [2024-05-13 03:12:06.125135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.125351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.125377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 00:31:15.387 [2024-05-13 03:12:06.125573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.125797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.125825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 00:31:15.387 [2024-05-13 03:12:06.126042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.126257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.126284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 00:31:15.387 [2024-05-13 03:12:06.126503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.126691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.126725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 00:31:15.387 [2024-05-13 03:12:06.126944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.127159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.127186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 
00:31:15.387 [2024-05-13 03:12:06.127409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.127606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.127633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 00:31:15.387 [2024-05-13 03:12:06.127864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.128059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.128086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 00:31:15.387 [2024-05-13 03:12:06.128450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.128706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.128746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 00:31:15.387 [2024-05-13 03:12:06.128956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.129161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.129189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 00:31:15.387 [2024-05-13 03:12:06.129384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.129603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.129629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 00:31:15.387 [2024-05-13 03:12:06.129845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.130056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.130083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 00:31:15.387 [2024-05-13 03:12:06.130275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.130484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.130511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 
00:31:15.387 [2024-05-13 03:12:06.130707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.130900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.130926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 00:31:15.387 [2024-05-13 03:12:06.131125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.131349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.131376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 00:31:15.387 [2024-05-13 03:12:06.131585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.131832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.131860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 00:31:15.387 [2024-05-13 03:12:06.132061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.132258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.132286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 00:31:15.387 [2024-05-13 03:12:06.132504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.132712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.132740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 00:31:15.387 [2024-05-13 03:12:06.132925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.133143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.133170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 00:31:15.387 [2024-05-13 03:12:06.133392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.133589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.133616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 
00:31:15.387 [2024-05-13 03:12:06.133835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.134053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.134081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 00:31:15.387 [2024-05-13 03:12:06.134276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.134457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.134484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 00:31:15.387 [2024-05-13 03:12:06.134727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.134980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.135007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 00:31:15.387 [2024-05-13 03:12:06.135220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.135438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.135465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 00:31:15.387 [2024-05-13 03:12:06.135650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.135884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.135911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 00:31:15.387 [2024-05-13 03:12:06.136127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.136493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.136519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 00:31:15.387 [2024-05-13 03:12:06.136721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.137135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.137177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 
00:31:15.387 [2024-05-13 03:12:06.137372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.137599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.137626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 00:31:15.387 [2024-05-13 03:12:06.137847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.138032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.138058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 00:31:15.387 [2024-05-13 03:12:06.138305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.138488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.138514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 00:31:15.387 [2024-05-13 03:12:06.138706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.138893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.387 [2024-05-13 03:12:06.138922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.387 qpair failed and we were unable to recover it. 00:31:15.388 [2024-05-13 03:12:06.139118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.139305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.139333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.388 qpair failed and we were unable to recover it. 00:31:15.388 [2024-05-13 03:12:06.139575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.139758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.139786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.388 qpair failed and we were unable to recover it. 00:31:15.388 [2024-05-13 03:12:06.140032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.140241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.140268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.388 qpair failed and we were unable to recover it. 
00:31:15.388 [2024-05-13 03:12:06.140454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.140674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.140711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.388 qpair failed and we were unable to recover it. 00:31:15.388 [2024-05-13 03:12:06.140929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.141133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.141160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.388 qpair failed and we were unable to recover it. 00:31:15.388 [2024-05-13 03:12:06.141406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.141590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.141616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.388 qpair failed and we were unable to recover it. 00:31:15.388 [2024-05-13 03:12:06.141817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.142027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.142053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.388 qpair failed and we were unable to recover it. 00:31:15.388 [2024-05-13 03:12:06.142290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.142538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.142564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.388 qpair failed and we were unable to recover it. 00:31:15.388 [2024-05-13 03:12:06.142751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.142949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.142975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.388 qpair failed and we were unable to recover it. 00:31:15.388 [2024-05-13 03:12:06.143170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.143384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.143411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.388 qpair failed and we were unable to recover it. 
00:31:15.388 [2024-05-13 03:12:06.143605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.143843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.143870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.388 qpair failed and we were unable to recover it. 00:31:15.388 [2024-05-13 03:12:06.144092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.144291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.144318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.388 qpair failed and we were unable to recover it. 00:31:15.388 [2024-05-13 03:12:06.144511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.144732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.144759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.388 qpair failed and we were unable to recover it. 00:31:15.388 [2024-05-13 03:12:06.144939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.145119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.145146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.388 qpair failed and we were unable to recover it. 00:31:15.388 [2024-05-13 03:12:06.145334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.145549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.145577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.388 qpair failed and we were unable to recover it. 00:31:15.388 [2024-05-13 03:12:06.145767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.145985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.146013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.388 qpair failed and we were unable to recover it. 00:31:15.388 [2024-05-13 03:12:06.146232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.146447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.146474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.388 qpair failed and we were unable to recover it. 
00:31:15.388 [2024-05-13 03:12:06.146680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.146897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.146924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.388 qpair failed and we were unable to recover it. 00:31:15.388 [2024-05-13 03:12:06.147135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.147332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.147358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.388 qpair failed and we were unable to recover it. 00:31:15.388 [2024-05-13 03:12:06.147570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.147764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.147793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.388 qpair failed and we were unable to recover it. 00:31:15.388 [2024-05-13 03:12:06.147988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.148172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.148199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.388 qpair failed and we were unable to recover it. 00:31:15.388 [2024-05-13 03:12:06.148415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.148615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.148641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.388 qpair failed and we were unable to recover it. 00:31:15.388 [2024-05-13 03:12:06.148849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.149038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.149064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.388 qpair failed and we were unable to recover it. 00:31:15.388 [2024-05-13 03:12:06.149277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.149453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.149479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.388 qpair failed and we were unable to recover it. 
00:31:15.388 [2024-05-13 03:12:06.149663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.149891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.149919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.388 qpair failed and we were unable to recover it. 00:31:15.388 [2024-05-13 03:12:06.150101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.150315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.150342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.388 qpair failed and we were unable to recover it. 00:31:15.388 [2024-05-13 03:12:06.150553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.150745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.150778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.388 qpair failed and we were unable to recover it. 00:31:15.388 [2024-05-13 03:12:06.150982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.151203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.151229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.388 qpair failed and we were unable to recover it. 00:31:15.388 [2024-05-13 03:12:06.151439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.151657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.151683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.388 qpair failed and we were unable to recover it. 00:31:15.388 [2024-05-13 03:12:06.151878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.152058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.152084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.388 qpair failed and we were unable to recover it. 00:31:15.388 [2024-05-13 03:12:06.152297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.152516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.152542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.388 qpair failed and we were unable to recover it. 
00:31:15.388 [2024-05-13 03:12:06.152762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.152972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.152999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.388 qpair failed and we were unable to recover it. 00:31:15.388 [2024-05-13 03:12:06.153198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.153374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.153400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.388 qpair failed and we were unable to recover it. 00:31:15.388 [2024-05-13 03:12:06.153584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.153767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.153795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.388 qpair failed and we were unable to recover it. 00:31:15.388 [2024-05-13 03:12:06.153986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.154196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.154222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.388 qpair failed and we were unable to recover it. 00:31:15.388 [2024-05-13 03:12:06.154408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.154617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.154644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.388 qpair failed and we were unable to recover it. 00:31:15.388 [2024-05-13 03:12:06.154879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.155092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.155122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.388 qpair failed and we were unable to recover it. 00:31:15.388 [2024-05-13 03:12:06.155335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.155571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.155597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.388 qpair failed and we were unable to recover it. 
00:31:15.388 [2024-05-13 03:12:06.155803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.155995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.156023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.388 qpair failed and we were unable to recover it. 00:31:15.388 [2024-05-13 03:12:06.156241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.156463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.156491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.388 qpair failed and we were unable to recover it. 00:31:15.388 [2024-05-13 03:12:06.156709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.156919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.156945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.388 qpair failed and we were unable to recover it. 00:31:15.388 [2024-05-13 03:12:06.157136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.157350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.157377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.388 qpair failed and we were unable to recover it. 00:31:15.388 [2024-05-13 03:12:06.157585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.157788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.157816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.388 qpair failed and we were unable to recover it. 00:31:15.388 [2024-05-13 03:12:06.158004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.158225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.158252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.388 qpair failed and we were unable to recover it. 00:31:15.388 [2024-05-13 03:12:06.158473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.161899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.161942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.388 qpair failed and we were unable to recover it. 
00:31:15.388 [2024-05-13 03:12:06.162146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.162388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.162416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.388 qpair failed and we were unable to recover it. 00:31:15.388 [2024-05-13 03:12:06.162661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.162894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.162927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.388 qpair failed and we were unable to recover it. 00:31:15.388 [2024-05-13 03:12:06.163182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.163399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.163425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.388 qpair failed and we were unable to recover it. 00:31:15.388 [2024-05-13 03:12:06.163664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.163892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.163919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.388 qpair failed and we were unable to recover it. 00:31:15.388 [2024-05-13 03:12:06.164155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.164366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.388 [2024-05-13 03:12:06.164393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.388 qpair failed and we were unable to recover it. 00:31:15.389 [2024-05-13 03:12:06.164591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.389 [2024-05-13 03:12:06.164811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.389 [2024-05-13 03:12:06.164839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.389 qpair failed and we were unable to recover it. 00:31:15.389 [2024-05-13 03:12:06.165059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.389 [2024-05-13 03:12:06.165255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.389 [2024-05-13 03:12:06.165283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.389 qpair failed and we were unable to recover it. 
00:31:15.389 [2024-05-13 03:12:06.165491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.389 [2024-05-13 03:12:06.165741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.389 [2024-05-13 03:12:06.165771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.389 qpair failed and we were unable to recover it. 00:31:15.389 [2024-05-13 03:12:06.165954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.389 [2024-05-13 03:12:06.166147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.389 [2024-05-13 03:12:06.166175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.389 qpair failed and we were unable to recover it. 00:31:15.389 [2024-05-13 03:12:06.166356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.389 [2024-05-13 03:12:06.166564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.389 [2024-05-13 03:12:06.166591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.389 qpair failed and we were unable to recover it. 00:31:15.389 [2024-05-13 03:12:06.166814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.389 [2024-05-13 03:12:06.167005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.389 [2024-05-13 03:12:06.167033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.389 qpair failed and we were unable to recover it. 00:31:15.389 [2024-05-13 03:12:06.167248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.653 [2024-05-13 03:12:06.167450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.653 [2024-05-13 03:12:06.167479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.653 qpair failed and we were unable to recover it. 00:31:15.653 [2024-05-13 03:12:06.167722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.653 [2024-05-13 03:12:06.167933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.653 [2024-05-13 03:12:06.167969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.653 qpair failed and we were unable to recover it. 00:31:15.653 [2024-05-13 03:12:06.168211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.653 [2024-05-13 03:12:06.168421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.653 [2024-05-13 03:12:06.168448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.653 qpair failed and we were unable to recover it. 
00:31:15.653 [2024-05-13 03:12:06.168662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.653 [2024-05-13 03:12:06.168888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.653 [2024-05-13 03:12:06.168915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.653 qpair failed and we were unable to recover it. 00:31:15.653 [2024-05-13 03:12:06.169136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.653 [2024-05-13 03:12:06.169317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.653 [2024-05-13 03:12:06.169344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.653 qpair failed and we were unable to recover it. 00:31:15.653 [2024-05-13 03:12:06.169562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.653 [2024-05-13 03:12:06.169775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.653 [2024-05-13 03:12:06.169802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.653 qpair failed and we were unable to recover it. 00:31:15.654 [2024-05-13 03:12:06.170010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.170225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.170252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.654 qpair failed and we were unable to recover it. 00:31:15.654 [2024-05-13 03:12:06.170471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.170671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.170706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.654 qpair failed and we were unable to recover it. 00:31:15.654 [2024-05-13 03:12:06.170933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.171134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.171161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.654 qpair failed and we were unable to recover it. 00:31:15.654 [2024-05-13 03:12:06.171360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.171557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.171587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.654 qpair failed and we were unable to recover it. 
00:31:15.654 [2024-05-13 03:12:06.171785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.172001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.172028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.654 qpair failed and we were unable to recover it. 00:31:15.654 [2024-05-13 03:12:06.172249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.172472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.172501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.654 qpair failed and we were unable to recover it. 00:31:15.654 [2024-05-13 03:12:06.172728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.172948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.172980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.654 qpair failed and we were unable to recover it. 00:31:15.654 [2024-05-13 03:12:06.173223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.173440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.173467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.654 qpair failed and we were unable to recover it. 00:31:15.654 [2024-05-13 03:12:06.173681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.173891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.173919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.654 qpair failed and we were unable to recover it. 00:31:15.654 [2024-05-13 03:12:06.174171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.174440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.174466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.654 qpair failed and we were unable to recover it. 00:31:15.654 [2024-05-13 03:12:06.174662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.174875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.174903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.654 qpair failed and we were unable to recover it. 
00:31:15.654 [2024-05-13 03:12:06.175122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.175384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.175411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.654 qpair failed and we were unable to recover it. 00:31:15.654 [2024-05-13 03:12:06.175599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.175820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.175848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.654 qpair failed and we were unable to recover it. 00:31:15.654 [2024-05-13 03:12:06.176079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.176266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.176293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.654 qpair failed and we were unable to recover it. 00:31:15.654 [2024-05-13 03:12:06.176481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.176719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.176757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.654 qpair failed and we were unable to recover it. 00:31:15.654 [2024-05-13 03:12:06.177008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.177255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.177282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.654 qpair failed and we were unable to recover it. 00:31:15.654 [2024-05-13 03:12:06.177486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.177704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.177731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.654 qpair failed and we were unable to recover it. 00:31:15.654 [2024-05-13 03:12:06.177956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.178168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.178195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.654 qpair failed and we were unable to recover it. 
00:31:15.654 [2024-05-13 03:12:06.178402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.178583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.178611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.654 qpair failed and we were unable to recover it. 00:31:15.654 [2024-05-13 03:12:06.178833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.179019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.179046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.654 qpair failed and we were unable to recover it. 00:31:15.654 [2024-05-13 03:12:06.179239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.179453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.179480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.654 qpair failed and we were unable to recover it. 00:31:15.654 [2024-05-13 03:12:06.179700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.179918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.179946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.654 qpair failed and we were unable to recover it. 00:31:15.654 [2024-05-13 03:12:06.180137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.180361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.180388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.654 qpair failed and we were unable to recover it. 00:31:15.654 [2024-05-13 03:12:06.180573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.180764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.180793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.654 qpair failed and we were unable to recover it. 00:31:15.654 [2024-05-13 03:12:06.181031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.181221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.181249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.654 qpair failed and we were unable to recover it. 
00:31:15.654 [2024-05-13 03:12:06.181467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.181662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.181689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.654 qpair failed and we were unable to recover it. 00:31:15.654 [2024-05-13 03:12:06.181942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.182163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.182189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.654 qpair failed and we were unable to recover it. 00:31:15.654 [2024-05-13 03:12:06.182433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.182641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.182668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.654 qpair failed and we were unable to recover it. 00:31:15.654 [2024-05-13 03:12:06.182874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.183070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.183097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.654 qpair failed and we were unable to recover it. 00:31:15.654 [2024-05-13 03:12:06.183288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.654 [2024-05-13 03:12:06.183503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.183531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.655 qpair failed and we were unable to recover it. 00:31:15.655 [2024-05-13 03:12:06.183729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.183942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.183970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.655 qpair failed and we were unable to recover it. 00:31:15.655 [2024-05-13 03:12:06.184185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.184375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.184402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.655 qpair failed and we were unable to recover it. 
00:31:15.655 [2024-05-13 03:12:06.184618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.184798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.184825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.655 qpair failed and we were unable to recover it. 00:31:15.655 [2024-05-13 03:12:06.185011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.185205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.185242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.655 qpair failed and we were unable to recover it. 00:31:15.655 [2024-05-13 03:12:06.185461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.185705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.185734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.655 qpair failed and we were unable to recover it. 00:31:15.655 [2024-05-13 03:12:06.185956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.186210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.186238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.655 qpair failed and we were unable to recover it. 00:31:15.655 [2024-05-13 03:12:06.186449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.186690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.186733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.655 qpair failed and we were unable to recover it. 00:31:15.655 [2024-05-13 03:12:06.186954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.187170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.187198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.655 qpair failed and we were unable to recover it. 00:31:15.655 [2024-05-13 03:12:06.187444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.187633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.187661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.655 qpair failed and we were unable to recover it. 
00:31:15.655 [2024-05-13 03:12:06.187893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.188094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.188121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.655 qpair failed and we were unable to recover it. 00:31:15.655 [2024-05-13 03:12:06.188339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.188522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.188549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.655 qpair failed and we were unable to recover it. 00:31:15.655 [2024-05-13 03:12:06.188749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.188958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.188985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.655 qpair failed and we were unable to recover it. 00:31:15.655 [2024-05-13 03:12:06.189199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.189413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.189440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.655 qpair failed and we were unable to recover it. 00:31:15.655 [2024-05-13 03:12:06.189648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.189842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.189870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.655 qpair failed and we were unable to recover it. 00:31:15.655 [2024-05-13 03:12:06.190083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.190267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.190294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.655 qpair failed and we were unable to recover it. 00:31:15.655 [2024-05-13 03:12:06.190529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.190717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.190745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.655 qpair failed and we were unable to recover it. 
00:31:15.655 [2024-05-13 03:12:06.190967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.191156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.191183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.655 qpair failed and we were unable to recover it. 00:31:15.655 [2024-05-13 03:12:06.191395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.191588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.191615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.655 qpair failed and we were unable to recover it. 00:31:15.655 [2024-05-13 03:12:06.191808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.192005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.192032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.655 qpair failed and we were unable to recover it. 00:31:15.655 [2024-05-13 03:12:06.192222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.192444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.192471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.655 qpair failed and we were unable to recover it. 00:31:15.655 [2024-05-13 03:12:06.192690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.192895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.192923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.655 qpair failed and we were unable to recover it. 00:31:15.655 [2024-05-13 03:12:06.193139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.193328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.193355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.655 qpair failed and we were unable to recover it. 00:31:15.655 [2024-05-13 03:12:06.193566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.193788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.193816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.655 qpair failed and we were unable to recover it. 
00:31:15.655 [2024-05-13 03:12:06.194011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.194230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.194258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.655 qpair failed and we were unable to recover it. 00:31:15.655 [2024-05-13 03:12:06.194477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.194718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.194746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.655 qpair failed and we were unable to recover it. 00:31:15.655 [2024-05-13 03:12:06.194994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.195226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.195252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.655 qpair failed and we were unable to recover it. 00:31:15.655 [2024-05-13 03:12:06.195464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.195650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.195677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.655 qpair failed and we were unable to recover it. 00:31:15.655 [2024-05-13 03:12:06.195900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.196113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.196139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.655 qpair failed and we were unable to recover it. 00:31:15.655 [2024-05-13 03:12:06.196327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.196526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.655 [2024-05-13 03:12:06.196553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.655 qpair failed and we were unable to recover it. 00:31:15.655 [2024-05-13 03:12:06.196769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 03:12:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:15.656 [2024-05-13 03:12:06.196980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.197007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.656 qpair failed and we were unable to recover it. 
00:31:15.656 03:12:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:31:15.656 [2024-05-13 03:12:06.197191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.197399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.197427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.656 qpair failed and we were unable to recover it. 00:31:15.656 03:12:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:15.656 [2024-05-13 03:12:06.197641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 03:12:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:15.656 [2024-05-13 03:12:06.197859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.197888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.656 qpair failed and we were unable to recover it. 00:31:15.656 03:12:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:15.656 [2024-05-13 03:12:06.198086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.198307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.198334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.656 qpair failed and we were unable to recover it. 00:31:15.656 [2024-05-13 03:12:06.198512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.198692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.198738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.656 qpair failed and we were unable to recover it. 00:31:15.656 [2024-05-13 03:12:06.198942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.199124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.199150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.656 qpair failed and we were unable to recover it. 00:31:15.656 [2024-05-13 03:12:06.199356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.199577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.199603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.656 qpair failed and we were unable to recover it. 
00:31:15.656 [2024-05-13 03:12:06.199821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.200060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.200087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.656 qpair failed and we were unable to recover it. 00:31:15.656 [2024-05-13 03:12:06.200302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.200497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.200525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.656 qpair failed and we were unable to recover it. 00:31:15.656 [2024-05-13 03:12:06.200717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.200921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.200953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.656 qpair failed and we were unable to recover it. 00:31:15.656 [2024-05-13 03:12:06.201164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.201384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.201412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.656 qpair failed and we were unable to recover it. 00:31:15.656 [2024-05-13 03:12:06.201601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.201851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.201878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.656 qpair failed and we were unable to recover it. 00:31:15.656 [2024-05-13 03:12:06.202097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.202304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.202333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.656 qpair failed and we were unable to recover it. 00:31:15.656 [2024-05-13 03:12:06.202522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.202714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.202751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.656 qpair failed and we were unable to recover it. 
00:31:15.656 [2024-05-13 03:12:06.202944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.203164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.203191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.656 qpair failed and we were unable to recover it. 00:31:15.656 [2024-05-13 03:12:06.203378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.203561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.203588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.656 qpair failed and we were unable to recover it. 00:31:15.656 [2024-05-13 03:12:06.203814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.204029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.204066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.656 qpair failed and we were unable to recover it. 00:31:15.656 [2024-05-13 03:12:06.204255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.204443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.204470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.656 qpair failed and we were unable to recover it. 00:31:15.656 [2024-05-13 03:12:06.204653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.204875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.204902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.656 qpair failed and we were unable to recover it. 00:31:15.656 [2024-05-13 03:12:06.205147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.205333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.205360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.656 qpair failed and we were unable to recover it. 00:31:15.656 [2024-05-13 03:12:06.205572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.205783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.205810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.656 qpair failed and we were unable to recover it. 
00:31:15.656 [2024-05-13 03:12:06.206032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.206219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.206245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.656 qpair failed and we were unable to recover it. 00:31:15.656 [2024-05-13 03:12:06.206455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.206641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.206667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.656 qpair failed and we were unable to recover it. 00:31:15.656 [2024-05-13 03:12:06.206903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.207119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.207146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.656 qpair failed and we were unable to recover it. 00:31:15.656 [2024-05-13 03:12:06.207359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.207543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.207571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.656 qpair failed and we were unable to recover it. 00:31:15.656 [2024-05-13 03:12:06.207795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.207990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.208017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.656 qpair failed and we were unable to recover it. 00:31:15.656 [2024-05-13 03:12:06.208196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.208381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.208408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.656 qpair failed and we were unable to recover it. 00:31:15.656 [2024-05-13 03:12:06.208621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.208840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.656 [2024-05-13 03:12:06.208867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.656 qpair failed and we were unable to recover it. 
00:31:15.656 [2024-05-13 03:12:06.209049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.209290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.209317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.657 qpair failed and we were unable to recover it. 00:31:15.657 [2024-05-13 03:12:06.209557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.209769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.209796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.657 qpair failed and we were unable to recover it. 00:31:15.657 [2024-05-13 03:12:06.210038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.210223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.210250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.657 qpair failed and we were unable to recover it. 00:31:15.657 [2024-05-13 03:12:06.210437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.210655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.210682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.657 qpair failed and we were unable to recover it. 00:31:15.657 [2024-05-13 03:12:06.210887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.211085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.211116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.657 qpair failed and we were unable to recover it. 00:31:15.657 [2024-05-13 03:12:06.211301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.211510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.211537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.657 qpair failed and we were unable to recover it. 00:31:15.657 [2024-05-13 03:12:06.211733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.211927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.211955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.657 qpair failed and we were unable to recover it. 
00:31:15.657 [2024-05-13 03:12:06.212154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.212369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.212396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.657 qpair failed and we were unable to recover it. 00:31:15.657 [2024-05-13 03:12:06.212580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.212820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.212847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.657 qpair failed and we were unable to recover it. 00:31:15.657 [2024-05-13 03:12:06.213038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.213246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.213272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.657 qpair failed and we were unable to recover it. 00:31:15.657 [2024-05-13 03:12:06.213512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.213751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.213778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.657 qpair failed and we were unable to recover it. 00:31:15.657 [2024-05-13 03:12:06.213978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.214157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.214183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.657 qpair failed and we were unable to recover it. 00:31:15.657 [2024-05-13 03:12:06.214374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.214555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.214581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.657 qpair failed and we were unable to recover it. 00:31:15.657 [2024-05-13 03:12:06.214805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.215022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.215049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.657 qpair failed and we were unable to recover it. 
00:31:15.657 [2024-05-13 03:12:06.215263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.215446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.215472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.657 qpair failed and we were unable to recover it. 00:31:15.657 [2024-05-13 03:12:06.215683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.215910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.215937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.657 qpair failed and we were unable to recover it. 00:31:15.657 [2024-05-13 03:12:06.216152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.216369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.216396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.657 qpair failed and we were unable to recover it. 00:31:15.657 [2024-05-13 03:12:06.216613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.216827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.216854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.657 qpair failed and we were unable to recover it. 00:31:15.657 [2024-05-13 03:12:06.217048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.217256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.217283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.657 qpair failed and we were unable to recover it. 00:31:15.657 [2024-05-13 03:12:06.217487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.217707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.217736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.657 qpair failed and we were unable to recover it. 00:31:15.657 [2024-05-13 03:12:06.217958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.218173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.218200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.657 qpair failed and we were unable to recover it. 
00:31:15.657 [2024-05-13 03:12:06.218446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.218630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.218657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.657 qpair failed and we were unable to recover it. 00:31:15.657 [2024-05-13 03:12:06.218882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.219096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.219123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.657 qpair failed and we were unable to recover it. 00:31:15.657 [2024-05-13 03:12:06.219311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.219555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.219582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.657 qpair failed and we were unable to recover it. 00:31:15.657 [2024-05-13 03:12:06.219832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.220042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.220068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.657 qpair failed and we were unable to recover it. 00:31:15.657 [2024-05-13 03:12:06.220311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.220526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.220553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.657 qpair failed and we were unable to recover it. 00:31:15.657 [2024-05-13 03:12:06.220736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.220934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.220963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.657 qpair failed and we were unable to recover it. 00:31:15.657 [2024-05-13 03:12:06.221187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.221415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.221443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.657 qpair failed and we were unable to recover it. 
00:31:15.657 [2024-05-13 03:12:06.221628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 03:12:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:15.657 [2024-05-13 03:12:06.221849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.657 [2024-05-13 03:12:06.221878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.657 qpair failed and we were unable to recover it. 00:31:15.658 03:12:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:15.658 [2024-05-13 03:12:06.222073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.658 03:12:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.658 [2024-05-13 03:12:06.222291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.658 [2024-05-13 03:12:06.222318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.658 03:12:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:15.658 qpair failed and we were unable to recover it. 00:31:15.658 [2024-05-13 03:12:06.222540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.658 [2024-05-13 03:12:06.222737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.658 [2024-05-13 03:12:06.222765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.658 qpair failed and we were unable to recover it. 00:31:15.658 [2024-05-13 03:12:06.222985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.658 [2024-05-13 03:12:06.223196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.658 [2024-05-13 03:12:06.223222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.658 qpair failed and we were unable to recover it. 00:31:15.658 [2024-05-13 03:12:06.223435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.658 [2024-05-13 03:12:06.223652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.658 [2024-05-13 03:12:06.223680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.658 qpair failed and we were unable to recover it. 00:31:15.658 [2024-05-13 03:12:06.223876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.658 [2024-05-13 03:12:06.224060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.658 [2024-05-13 03:12:06.224086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.658 qpair failed and we were unable to recover it. 
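Interleaved with the connection retries, the test script begins configuring the target: target_disconnect.sh line 19 creates a RAM-backed bdev named Malloc0 (64 MB, 512-byte blocks) through the rpc_cmd helper, right after installing its cleanup trap. Roughly the standalone equivalent, assuming the stock scripts/rpc.py entry point and default RPC socket (rpc_cmd wraps both):

# Create a 64 MB malloc bdev with a 512-byte block size, named Malloc0
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0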
00:31:15.658 [2024-05-13 03:12:06.224281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.658 [2024-05-13 03:12:06.224495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.658 [2024-05-13 03:12:06.224521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.658 qpair failed and we were unable to recover it. 00:31:15.658 [2024-05-13 03:12:06.224752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.658 [2024-05-13 03:12:06.224943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.658 [2024-05-13 03:12:06.224976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.658 qpair failed and we were unable to recover it. 00:31:15.658 [2024-05-13 03:12:06.225175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.658 [2024-05-13 03:12:06.225391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.658 [2024-05-13 03:12:06.225419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.658 qpair failed and we were unable to recover it. 00:31:15.658 [2024-05-13 03:12:06.225643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.658 [2024-05-13 03:12:06.225862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.658 [2024-05-13 03:12:06.225889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.658 qpair failed and we were unable to recover it. 00:31:15.658 [2024-05-13 03:12:06.226100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.658 [2024-05-13 03:12:06.226284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.658 [2024-05-13 03:12:06.226311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.658 qpair failed and we were unable to recover it. 00:31:15.658 [2024-05-13 03:12:06.226509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.658 [2024-05-13 03:12:06.226702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.658 [2024-05-13 03:12:06.226730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.658 qpair failed and we were unable to recover it. 00:31:15.658 [2024-05-13 03:12:06.226914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.658 [2024-05-13 03:12:06.227128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.658 [2024-05-13 03:12:06.227155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.658 qpair failed and we were unable to recover it. 
00:31:15.658 [2024-05-13 03:12:06.227402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.658 [2024-05-13 03:12:06.227620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.658 [2024-05-13 03:12:06.227647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.658 qpair failed and we were unable to recover it. 00:31:15.658 [2024-05-13 03:12:06.227883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.658 [2024-05-13 03:12:06.228101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.658 [2024-05-13 03:12:06.228127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.658 qpair failed and we were unable to recover it. 00:31:15.658 [2024-05-13 03:12:06.228319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.658 [2024-05-13 03:12:06.228511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.658 [2024-05-13 03:12:06.228537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.658 qpair failed and we were unable to recover it. 00:31:15.658 [2024-05-13 03:12:06.228750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.658 [2024-05-13 03:12:06.228965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.658 [2024-05-13 03:12:06.228991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.658 qpair failed and we were unable to recover it. 00:31:15.658 [2024-05-13 03:12:06.229209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.658 [2024-05-13 03:12:06.229398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.658 [2024-05-13 03:12:06.229426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.658 qpair failed and we were unable to recover it. 00:31:15.658 [2024-05-13 03:12:06.229644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.658 [2024-05-13 03:12:06.229867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.658 [2024-05-13 03:12:06.229894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.658 qpair failed and we were unable to recover it. 00:31:15.658 [2024-05-13 03:12:06.230124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.658 [2024-05-13 03:12:06.230336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.658 [2024-05-13 03:12:06.230362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.658 qpair failed and we were unable to recover it. 
00:31:15.658 [2024-05-13 03:12:06.230556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.658 [2024-05-13 03:12:06.230769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.658 [2024-05-13 03:12:06.230797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.658 qpair failed and we were unable to recover it. 00:31:15.658 [2024-05-13 03:12:06.231017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.658 [2024-05-13 03:12:06.231207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.658 [2024-05-13 03:12:06.231235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.658 qpair failed and we were unable to recover it. 00:31:15.658 [2024-05-13 03:12:06.231427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.658 [2024-05-13 03:12:06.231639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.658 [2024-05-13 03:12:06.231666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.658 qpair failed and we were unable to recover it. 00:31:15.658 [2024-05-13 03:12:06.231899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.658 [2024-05-13 03:12:06.232093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.658 [2024-05-13 03:12:06.232119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.658 qpair failed and we were unable to recover it. 00:31:15.658 [2024-05-13 03:12:06.232333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.658 [2024-05-13 03:12:06.232526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.658 [2024-05-13 03:12:06.232552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.659 qpair failed and we were unable to recover it. 00:31:15.659 [2024-05-13 03:12:06.232764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.233101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.233127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.659 qpair failed and we were unable to recover it. 00:31:15.659 [2024-05-13 03:12:06.233342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.233556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.233583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.659 qpair failed and we were unable to recover it. 
00:31:15.659 [2024-05-13 03:12:06.233826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.234049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.234076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.659 qpair failed and we were unable to recover it. 00:31:15.659 [2024-05-13 03:12:06.234293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.234481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.234509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.659 qpair failed and we were unable to recover it. 00:31:15.659 [2024-05-13 03:12:06.234724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.234915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.234941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.659 qpair failed and we were unable to recover it. 00:31:15.659 [2024-05-13 03:12:06.235333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.235733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.235782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.659 qpair failed and we were unable to recover it. 00:31:15.659 [2024-05-13 03:12:06.235998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.236195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.236221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.659 qpair failed and we were unable to recover it. 00:31:15.659 [2024-05-13 03:12:06.236574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.236863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.236890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.659 qpair failed and we were unable to recover it. 00:31:15.659 [2024-05-13 03:12:06.237083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.237307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.237333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.659 qpair failed and we were unable to recover it. 
00:31:15.659 [2024-05-13 03:12:06.237520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.237741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.237768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.659 qpair failed and we were unable to recover it. 00:31:15.659 [2024-05-13 03:12:06.237979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.238198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.238225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.659 qpair failed and we were unable to recover it. 00:31:15.659 [2024-05-13 03:12:06.238440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.238656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.238683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.659 qpair failed and we were unable to recover it. 00:31:15.659 [2024-05-13 03:12:06.238890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.239242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.239267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.659 qpair failed and we were unable to recover it. 00:31:15.659 [2024-05-13 03:12:06.239687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.239921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.239948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.659 qpair failed and we were unable to recover it. 00:31:15.659 [2024-05-13 03:12:06.240171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.240386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.240412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.659 qpair failed and we were unable to recover it. 00:31:15.659 [2024-05-13 03:12:06.240596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.240794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.240821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.659 qpair failed and we were unable to recover it. 
00:31:15.659 [2024-05-13 03:12:06.241038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.241236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.241265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.659 qpair failed and we were unable to recover it. 00:31:15.659 [2024-05-13 03:12:06.241447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.241623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.241650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.659 qpair failed and we were unable to recover it. 00:31:15.659 [2024-05-13 03:12:06.241874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.242085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.242111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.659 qpair failed and we were unable to recover it. 00:31:15.659 [2024-05-13 03:12:06.242294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.242541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.242568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.659 qpair failed and we were unable to recover it. 00:31:15.659 [2024-05-13 03:12:06.242785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.242985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.243011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.659 qpair failed and we were unable to recover it. 00:31:15.659 [2024-05-13 03:12:06.243231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.243449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.243477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.659 qpair failed and we were unable to recover it. 00:31:15.659 [2024-05-13 03:12:06.243705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.243923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.243949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.659 qpair failed and we were unable to recover it. 
00:31:15.659 [2024-05-13 03:12:06.244150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.244329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.244355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.659 qpair failed and we were unable to recover it. 00:31:15.659 [2024-05-13 03:12:06.244568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.244783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.244811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.659 qpair failed and we were unable to recover it. 00:31:15.659 [2024-05-13 03:12:06.245054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.245247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.245274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.659 qpair failed and we were unable to recover it. 00:31:15.659 [2024-05-13 03:12:06.245466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.245666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.245702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.659 qpair failed and we were unable to recover it. 00:31:15.659 [2024-05-13 03:12:06.245925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.246145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.246172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.659 qpair failed and we were unable to recover it. 00:31:15.659 [2024-05-13 03:12:06.246354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.246544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.246570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.659 qpair failed and we were unable to recover it. 00:31:15.659 [2024-05-13 03:12:06.246761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.659 [2024-05-13 03:12:06.246947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.246975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.660 qpair failed and we were unable to recover it. 
00:31:15.660 [2024-05-13 03:12:06.247222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.247410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.247438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.660 qpair failed and we were unable to recover it. 00:31:15.660 [2024-05-13 03:12:06.247657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 Malloc0 00:31:15.660 [2024-05-13 03:12:06.247880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.247907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.660 qpair failed and we were unable to recover it. 00:31:15.660 03:12:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.660 [2024-05-13 03:12:06.248091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 03:12:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:15.660 [2024-05-13 03:12:06.248289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.248317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.660 qpair failed and we were unable to recover it. 00:31:15.660 03:12:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.660 03:12:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:15.660 [2024-05-13 03:12:06.248531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.248722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.248749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.660 qpair failed and we were unable to recover it. 00:31:15.660 [2024-05-13 03:12:06.248951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.249191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.249218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.660 qpair failed and we were unable to recover it. 00:31:15.660 [2024-05-13 03:12:06.249461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.249684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.249719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.660 qpair failed and we were unable to recover it. 
00:31:15.660 [2024-05-13 03:12:06.249934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.250148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.250174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.660 qpair failed and we were unable to recover it. 00:31:15.660 [2024-05-13 03:12:06.250386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.250601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.250626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.660 qpair failed and we were unable to recover it. 00:31:15.660 [2024-05-13 03:12:06.250835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.251052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.251078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.660 qpair failed and we were unable to recover it. 00:31:15.660 [2024-05-13 03:12:06.251295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.251433] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:15.660 [2024-05-13 03:12:06.251478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.251503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.660 qpair failed and we were unable to recover it. 00:31:15.660 [2024-05-13 03:12:06.251743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.251989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.252015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.660 qpair failed and we were unable to recover it. 00:31:15.660 [2024-05-13 03:12:06.252197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.252410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.252441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.660 qpair failed and we were unable to recover it. 00:31:15.660 [2024-05-13 03:12:06.252662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.252884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.252912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.660 qpair failed and we were unable to recover it. 
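target_disconnect.sh line 21 then creates the NVMe-oF TCP transport on the target, and the "TCP Transport Init" notice a few records later confirms that the target accepted it. The standalone equivalent is roughly the command below; in SPDK's rpc.py of this vintage the -o flag toggles the TCP C2H-success optimization, but treat that reading as an assumption, since the test simply passes the flag through:

# Create the TCP transport for the NVMe-oF target
./scripts/rpc.py nvmf_create_transport -t tcp -o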
00:31:15.660 [2024-05-13 03:12:06.253100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.253312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.253338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.660 qpair failed and we were unable to recover it. 00:31:15.660 [2024-05-13 03:12:06.253531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.253745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.253772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.660 qpair failed and we were unable to recover it. 00:31:15.660 [2024-05-13 03:12:06.253956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.254140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.254167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.660 qpair failed and we were unable to recover it. 00:31:15.660 [2024-05-13 03:12:06.254363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.254576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.254602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.660 qpair failed and we were unable to recover it. 00:31:15.660 [2024-05-13 03:12:06.254824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.255024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.255051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.660 qpair failed and we were unable to recover it. 00:31:15.660 [2024-05-13 03:12:06.255271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.255484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.255510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.660 qpair failed and we were unable to recover it. 00:31:15.660 [2024-05-13 03:12:06.255731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.255917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.255943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.660 qpair failed and we were unable to recover it. 
00:31:15.660 [2024-05-13 03:12:06.256163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.256403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.256430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.660 qpair failed and we were unable to recover it. 00:31:15.660 [2024-05-13 03:12:06.256621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.256853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.256885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.660 qpair failed and we were unable to recover it. 00:31:15.660 [2024-05-13 03:12:06.257076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.257285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.257311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.660 qpair failed and we were unable to recover it. 00:31:15.660 [2024-05-13 03:12:06.257535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.257751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.257778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.660 qpair failed and we were unable to recover it. 00:31:15.660 [2024-05-13 03:12:06.258002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.258215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.258242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.660 qpair failed and we were unable to recover it. 00:31:15.660 [2024-05-13 03:12:06.258452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.258646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.258674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.660 qpair failed and we were unable to recover it. 00:31:15.660 [2024-05-13 03:12:06.258906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.259123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.259150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.660 qpair failed and we were unable to recover it. 
00:31:15.660 [2024-05-13 03:12:06.259356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.660 [2024-05-13 03:12:06.259575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.259602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.661 qpair failed and we were unable to recover it. 00:31:15.661 03:12:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.661 03:12:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:15.661 [2024-05-13 03:12:06.259849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 03:12:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.661 [2024-05-13 03:12:06.260047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 03:12:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:15.661 [2024-05-13 03:12:06.260074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.661 qpair failed and we were unable to recover it. 00:31:15.661 [2024-05-13 03:12:06.260337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.260583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.260611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.661 qpair failed and we were unable to recover it. 00:31:15.661 [2024-05-13 03:12:06.260835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.261056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.261084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.661 qpair failed and we were unable to recover it. 00:31:15.661 [2024-05-13 03:12:06.261301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.261493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.261522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.661 qpair failed and we were unable to recover it. 00:31:15.661 [2024-05-13 03:12:06.261769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.262006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.262033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.661 qpair failed and we were unable to recover it. 
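Next, line 22 of the script creates the subsystem the initiator will eventually connect to, with any host allowed (-a) and a fixed serial number. Standalone form, again assuming the stock rpc.py wrapper:

# Create subsystem cnode1, allow any host, fixed serial number
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001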
00:31:15.661 [2024-05-13 03:12:06.262220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.262410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.262437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.661 qpair failed and we were unable to recover it. 00:31:15.661 [2024-05-13 03:12:06.262647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.262831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.262858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.661 qpair failed and we were unable to recover it. 00:31:15.661 [2024-05-13 03:12:06.263049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.263267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.263294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.661 qpair failed and we were unable to recover it. 00:31:15.661 [2024-05-13 03:12:06.263542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.263775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.263802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.661 qpair failed and we were unable to recover it. 00:31:15.661 [2024-05-13 03:12:06.264028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.264242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.264269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.661 qpair failed and we were unable to recover it. 00:31:15.661 [2024-05-13 03:12:06.264492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.264710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.264750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.661 qpair failed and we were unable to recover it. 00:31:15.661 [2024-05-13 03:12:06.264969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.265158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.265185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.661 qpair failed and we were unable to recover it. 
00:31:15.661 [2024-05-13 03:12:06.265405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.265654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.265681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.661 qpair failed and we were unable to recover it. 00:31:15.661 [2024-05-13 03:12:06.265913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.266100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.266127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.661 qpair failed and we were unable to recover it. 00:31:15.661 [2024-05-13 03:12:06.266323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.266569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.266596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.661 qpair failed and we were unable to recover it. 00:31:15.661 [2024-05-13 03:12:06.266821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.267009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.267036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.661 qpair failed and we were unable to recover it. 00:31:15.661 [2024-05-13 03:12:06.267256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.267471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.267498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.661 qpair failed and we were unable to recover it. 00:31:15.661 03:12:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.661 [2024-05-13 03:12:06.267725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 03:12:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:15.661 [2024-05-13 03:12:06.267962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.267997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.661 qpair failed and we were unable to recover it. 
00:31:15.661 03:12:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.661 03:12:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:15.661 [2024-05-13 03:12:06.268185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.268382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.268409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.661 qpair failed and we were unable to recover it. 00:31:15.661 [2024-05-13 03:12:06.268595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.268809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.268836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.661 qpair failed and we were unable to recover it. 00:31:15.661 [2024-05-13 03:12:06.269031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.269244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.269271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.661 qpair failed and we were unable to recover it. 00:31:15.661 [2024-05-13 03:12:06.269480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.269707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.269746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.661 qpair failed and we were unable to recover it. 00:31:15.661 [2024-05-13 03:12:06.269959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.270188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.270215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.661 qpair failed and we were unable to recover it. 00:31:15.661 [2024-05-13 03:12:06.270423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.270604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.270631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.661 qpair failed and we were unable to recover it. 00:31:15.661 [2024-05-13 03:12:06.270843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.271054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.271081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.661 qpair failed and we were unable to recover it. 
00:31:15.661 [2024-05-13 03:12:06.271305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.271519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.271546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.661 qpair failed and we were unable to recover it. 00:31:15.661 [2024-05-13 03:12:06.271765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.661 [2024-05-13 03:12:06.271957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.662 [2024-05-13 03:12:06.271996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.662 qpair failed and we were unable to recover it. 00:31:15.662 [2024-05-13 03:12:06.272210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.662 [2024-05-13 03:12:06.272437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.662 [2024-05-13 03:12:06.272464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.662 qpair failed and we were unable to recover it. 00:31:15.662 [2024-05-13 03:12:06.272682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.662 [2024-05-13 03:12:06.272888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.662 [2024-05-13 03:12:06.272915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.662 qpair failed and we were unable to recover it. 00:31:15.662 [2024-05-13 03:12:06.273130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.662 [2024-05-13 03:12:06.273346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.662 [2024-05-13 03:12:06.273374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.662 qpair failed and we were unable to recover it. 00:31:15.662 [2024-05-13 03:12:06.273590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.662 [2024-05-13 03:12:06.273775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.662 [2024-05-13 03:12:06.273803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.662 qpair failed and we were unable to recover it. 00:31:15.662 [2024-05-13 03:12:06.273993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.662 [2024-05-13 03:12:06.274244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.662 [2024-05-13 03:12:06.274272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.662 qpair failed and we were unable to recover it. 
00:31:15.662 [2024-05-13 03:12:06.274518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.662 [2024-05-13 03:12:06.274764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.662 [2024-05-13 03:12:06.274792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.662 qpair failed and we were unable to recover it. 00:31:15.662 [2024-05-13 03:12:06.275009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.662 [2024-05-13 03:12:06.275191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.662 [2024-05-13 03:12:06.275218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.662 qpair failed and we were unable to recover it. 00:31:15.662 [2024-05-13 03:12:06.275415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.662 [2024-05-13 03:12:06.275639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.662 [2024-05-13 03:12:06.275667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.662 qpair failed and we were unable to recover it. 00:31:15.662 03:12:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.662 03:12:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:15.662 [2024-05-13 03:12:06.275893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.662 03:12:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.662 03:12:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:15.662 [2024-05-13 03:12:06.276111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.662 [2024-05-13 03:12:06.276138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.662 qpair failed and we were unable to recover it. 00:31:15.662 [2024-05-13 03:12:06.276337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.662 [2024-05-13 03:12:06.276551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.662 [2024-05-13 03:12:06.276578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.662 qpair failed and we were unable to recover it. 00:31:15.662 [2024-05-13 03:12:06.276792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.662 [2024-05-13 03:12:06.277007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.662 [2024-05-13 03:12:06.277034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.662 qpair failed and we were unable to recover it. 
00:31:15.662 [2024-05-13 03:12:06.277258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.662 [2024-05-13 03:12:06.277447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.662 [2024-05-13 03:12:06.277474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.662 qpair failed and we were unable to recover it. 00:31:15.662 [2024-05-13 03:12:06.277717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.662 [2024-05-13 03:12:06.277900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.662 [2024-05-13 03:12:06.277927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.662 qpair failed and we were unable to recover it. 00:31:15.662 [2024-05-13 03:12:06.278176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.662 [2024-05-13 03:12:06.278396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.662 [2024-05-13 03:12:06.278422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.662 qpair failed and we were unable to recover it. 00:31:15.662 [2024-05-13 03:12:06.278604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.662 [2024-05-13 03:12:06.278848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.662 [2024-05-13 03:12:06.278885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.662 qpair failed and we were unable to recover it. 00:31:15.662 [2024-05-13 03:12:06.279102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.662 [2024-05-13 03:12:06.279289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.662 [2024-05-13 03:12:06.279315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0044000b90 with addr=10.0.0.2, port=4420 00:31:15.662 qpair failed and we were unable to recover it. 00:31:15.662 [2024-05-13 03:12:06.279426] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:31:15.662 [2024-05-13 03:12:06.279537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.662 [2024-05-13 03:12:06.279705] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:15.662 [2024-05-13 03:12:06.282785] posix.c: 675:posix_sock_psk_use_session_client_cb: *ERROR*: PSK is not set 00:31:15.662 [2024-05-13 03:12:06.282849] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f0044000b90 (107): Transport endpoint is not connected 00:31:15.662 [2024-05-13 03:12:06.282925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.662 qpair failed and we were unable to recover it. 
00:31:15.662 03:12:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.662 03:12:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:15.662 03:12:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.662 03:12:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:15.662 03:12:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.662 03:12:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@58 -- # wait 485925 00:31:15.662 [2024-05-13 03:12:06.292205] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.662 [2024-05-13 03:12:06.292446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.662 [2024-05-13 03:12:06.292476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.662 [2024-05-13 03:12:06.292494] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.662 [2024-05-13 03:12:06.292507] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:15.662 [2024-05-13 03:12:06.292539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.662 qpair failed and we were unable to recover it. 00:31:15.662 [2024-05-13 03:12:06.302141] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.662 [2024-05-13 03:12:06.302360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.662 [2024-05-13 03:12:06.302393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.663 [2024-05-13 03:12:06.302409] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.663 [2024-05-13 03:12:06.302422] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:15.663 [2024-05-13 03:12:06.302453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.663 qpair failed and we were unable to recover it. 
00:31:15.663 [2024-05-13 03:12:06.312160] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.663 [2024-05-13 03:12:06.312356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.663 [2024-05-13 03:12:06.312382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.663 [2024-05-13 03:12:06.312397] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.663 [2024-05-13 03:12:06.312411] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:15.663 [2024-05-13 03:12:06.312441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.663 qpair failed and we were unable to recover it. 00:31:15.663 [2024-05-13 03:12:06.322165] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.663 [2024-05-13 03:12:06.322402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.663 [2024-05-13 03:12:06.322432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.663 [2024-05-13 03:12:06.322453] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.663 [2024-05-13 03:12:06.322467] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:15.663 [2024-05-13 03:12:06.322498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.663 qpair failed and we were unable to recover it. 00:31:15.663 [2024-05-13 03:12:06.332152] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.663 [2024-05-13 03:12:06.332362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.663 [2024-05-13 03:12:06.332388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.663 [2024-05-13 03:12:06.332403] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.663 [2024-05-13 03:12:06.332416] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:15.663 [2024-05-13 03:12:06.332446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.663 qpair failed and we were unable to recover it. 
00:31:15.663 [2024-05-13 03:12:06.342137] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.663 [2024-05-13 03:12:06.342326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.663 [2024-05-13 03:12:06.342352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.663 [2024-05-13 03:12:06.342366] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.663 [2024-05-13 03:12:06.342379] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:15.663 [2024-05-13 03:12:06.342417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.663 qpair failed and we were unable to recover it. 00:31:15.663 [2024-05-13 03:12:06.352214] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.663 [2024-05-13 03:12:06.352452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.663 [2024-05-13 03:12:06.352480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.663 [2024-05-13 03:12:06.352495] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.663 [2024-05-13 03:12:06.352508] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:15.663 [2024-05-13 03:12:06.352538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.663 qpair failed and we were unable to recover it. 00:31:15.663 [2024-05-13 03:12:06.362322] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.663 [2024-05-13 03:12:06.362518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.663 [2024-05-13 03:12:06.362543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.663 [2024-05-13 03:12:06.362557] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.663 [2024-05-13 03:12:06.362571] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:15.663 [2024-05-13 03:12:06.362602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.663 qpair failed and we were unable to recover it. 
00:31:15.663 [2024-05-13 03:12:06.372227] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.663 [2024-05-13 03:12:06.372435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.663 [2024-05-13 03:12:06.372465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.663 [2024-05-13 03:12:06.372481] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.663 [2024-05-13 03:12:06.372495] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:15.663 [2024-05-13 03:12:06.372525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.663 qpair failed and we were unable to recover it. 00:31:15.663 [2024-05-13 03:12:06.382295] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.663 [2024-05-13 03:12:06.382512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.663 [2024-05-13 03:12:06.382539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.663 [2024-05-13 03:12:06.382554] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.663 [2024-05-13 03:12:06.382567] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:15.663 [2024-05-13 03:12:06.382598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.663 qpair failed and we were unable to recover it. 00:31:15.663 [2024-05-13 03:12:06.392278] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.663 [2024-05-13 03:12:06.392478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.663 [2024-05-13 03:12:06.392505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.663 [2024-05-13 03:12:06.392519] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.663 [2024-05-13 03:12:06.392532] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:15.663 [2024-05-13 03:12:06.392563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.663 qpair failed and we were unable to recover it. 
00:31:15.663 [2024-05-13 03:12:06.402302] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.663 [2024-05-13 03:12:06.402487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.663 [2024-05-13 03:12:06.402514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.663 [2024-05-13 03:12:06.402529] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.663 [2024-05-13 03:12:06.402541] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:15.663 [2024-05-13 03:12:06.402571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.663 qpair failed and we were unable to recover it. 00:31:15.663 [2024-05-13 03:12:06.412340] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.663 [2024-05-13 03:12:06.412573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.663 [2024-05-13 03:12:06.412600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.663 [2024-05-13 03:12:06.412615] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.663 [2024-05-13 03:12:06.412628] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:15.663 [2024-05-13 03:12:06.412658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.663 qpair failed and we were unable to recover it. 00:31:15.663 [2024-05-13 03:12:06.422371] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.663 [2024-05-13 03:12:06.422658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.663 [2024-05-13 03:12:06.422714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.663 [2024-05-13 03:12:06.422733] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.663 [2024-05-13 03:12:06.422746] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:15.663 [2024-05-13 03:12:06.422778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.663 qpair failed and we were unable to recover it. 
00:31:15.663 [2024-05-13 03:12:06.432385] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.663 [2024-05-13 03:12:06.432608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.663 [2024-05-13 03:12:06.432634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.663 [2024-05-13 03:12:06.432649] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.663 [2024-05-13 03:12:06.432666] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:15.664 [2024-05-13 03:12:06.432720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.664 qpair failed and we were unable to recover it. 00:31:15.664 [2024-05-13 03:12:06.442429] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.664 [2024-05-13 03:12:06.442615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.664 [2024-05-13 03:12:06.442642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.664 [2024-05-13 03:12:06.442657] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.664 [2024-05-13 03:12:06.442670] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:15.664 [2024-05-13 03:12:06.442712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.664 qpair failed and we were unable to recover it. 00:31:15.959 [2024-05-13 03:12:06.452440] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.959 [2024-05-13 03:12:06.452632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.959 [2024-05-13 03:12:06.452658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.959 [2024-05-13 03:12:06.452673] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.959 [2024-05-13 03:12:06.452686] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:15.959 [2024-05-13 03:12:06.452727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.959 qpair failed and we were unable to recover it. 
00:31:15.960 [2024-05-13 03:12:06.462555] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.960 [2024-05-13 03:12:06.462771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.960 [2024-05-13 03:12:06.462799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.960 [2024-05-13 03:12:06.462813] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.960 [2024-05-13 03:12:06.462826] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:15.960 [2024-05-13 03:12:06.462856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.960 qpair failed and we were unable to recover it. 00:31:15.960 [2024-05-13 03:12:06.472513] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.960 [2024-05-13 03:12:06.472721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.960 [2024-05-13 03:12:06.472747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.960 [2024-05-13 03:12:06.472762] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.960 [2024-05-13 03:12:06.472775] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:15.960 [2024-05-13 03:12:06.472804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.960 qpair failed and we were unable to recover it. 00:31:15.960 [2024-05-13 03:12:06.482524] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.960 [2024-05-13 03:12:06.482721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.960 [2024-05-13 03:12:06.482748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.960 [2024-05-13 03:12:06.482763] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.960 [2024-05-13 03:12:06.482776] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:15.960 [2024-05-13 03:12:06.482806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.960 qpair failed and we were unable to recover it. 
00:31:15.960 [2024-05-13 03:12:06.492541] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.960 [2024-05-13 03:12:06.492738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.960 [2024-05-13 03:12:06.492765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.960 [2024-05-13 03:12:06.492779] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.960 [2024-05-13 03:12:06.492792] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:15.960 [2024-05-13 03:12:06.492822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.960 qpair failed and we were unable to recover it. 00:31:15.960 [2024-05-13 03:12:06.502689] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.960 [2024-05-13 03:12:06.502927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.960 [2024-05-13 03:12:06.502953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.960 [2024-05-13 03:12:06.502968] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.960 [2024-05-13 03:12:06.502982] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:15.960 [2024-05-13 03:12:06.503013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.960 qpair failed and we were unable to recover it. 00:31:15.960 [2024-05-13 03:12:06.512593] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.960 [2024-05-13 03:12:06.512829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.960 [2024-05-13 03:12:06.512856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.960 [2024-05-13 03:12:06.512871] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.960 [2024-05-13 03:12:06.512883] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:15.960 [2024-05-13 03:12:06.512926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.960 qpair failed and we were unable to recover it. 
00:31:15.960 [2024-05-13 03:12:06.522636] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.960 [2024-05-13 03:12:06.522841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.960 [2024-05-13 03:12:06.522868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.960 [2024-05-13 03:12:06.522889] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.960 [2024-05-13 03:12:06.522903] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:15.960 [2024-05-13 03:12:06.522934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.960 qpair failed and we were unable to recover it. 00:31:15.960 [2024-05-13 03:12:06.532652] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.960 [2024-05-13 03:12:06.532842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.960 [2024-05-13 03:12:06.532868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.960 [2024-05-13 03:12:06.532884] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.960 [2024-05-13 03:12:06.532896] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:15.960 [2024-05-13 03:12:06.532927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.960 qpair failed and we were unable to recover it. 00:31:15.960 [2024-05-13 03:12:06.542796] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.960 [2024-05-13 03:12:06.542996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.960 [2024-05-13 03:12:06.543022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.960 [2024-05-13 03:12:06.543037] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.960 [2024-05-13 03:12:06.543050] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:15.960 [2024-05-13 03:12:06.543080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.960 qpair failed and we were unable to recover it. 
00:31:15.960 [2024-05-13 03:12:06.552760] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.960 [2024-05-13 03:12:06.552958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.960 [2024-05-13 03:12:06.552984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.960 [2024-05-13 03:12:06.553014] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.960 [2024-05-13 03:12:06.553028] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:15.960 [2024-05-13 03:12:06.553058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.960 qpair failed and we were unable to recover it. 00:31:15.960 [2024-05-13 03:12:06.562755] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.960 [2024-05-13 03:12:06.562938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.960 [2024-05-13 03:12:06.562965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.960 [2024-05-13 03:12:06.562979] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.960 [2024-05-13 03:12:06.562992] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:15.960 [2024-05-13 03:12:06.563022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.960 qpair failed and we were unable to recover it. 00:31:15.960 [2024-05-13 03:12:06.572783] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.960 [2024-05-13 03:12:06.572965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.960 [2024-05-13 03:12:06.572991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.960 [2024-05-13 03:12:06.573006] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.960 [2024-05-13 03:12:06.573018] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:15.960 [2024-05-13 03:12:06.573047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.960 qpair failed and we were unable to recover it. 
00:31:15.960 [2024-05-13 03:12:06.582826] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.960 [2024-05-13 03:12:06.583008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.961 [2024-05-13 03:12:06.583034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.961 [2024-05-13 03:12:06.583049] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.961 [2024-05-13 03:12:06.583062] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:15.961 [2024-05-13 03:12:06.583092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.961 qpair failed and we were unable to recover it. 00:31:15.961 [2024-05-13 03:12:06.592816] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.961 [2024-05-13 03:12:06.593016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.961 [2024-05-13 03:12:06.593042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.961 [2024-05-13 03:12:06.593057] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.961 [2024-05-13 03:12:06.593070] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:15.961 [2024-05-13 03:12:06.593100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.961 qpair failed and we were unable to recover it. 00:31:15.961 [2024-05-13 03:12:06.602912] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.961 [2024-05-13 03:12:06.603108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.961 [2024-05-13 03:12:06.603134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.961 [2024-05-13 03:12:06.603149] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.961 [2024-05-13 03:12:06.603161] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:15.961 [2024-05-13 03:12:06.603192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.961 qpair failed and we were unable to recover it. 
00:31:15.961 [2024-05-13 03:12:06.612927] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.961 [2024-05-13 03:12:06.613122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.961 [2024-05-13 03:12:06.613153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.961 [2024-05-13 03:12:06.613169] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.961 [2024-05-13 03:12:06.613182] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:15.961 [2024-05-13 03:12:06.613211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.961 qpair failed and we were unable to recover it. 00:31:15.961 [2024-05-13 03:12:06.622944] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.961 [2024-05-13 03:12:06.623226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.961 [2024-05-13 03:12:06.623251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.961 [2024-05-13 03:12:06.623266] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.961 [2024-05-13 03:12:06.623278] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:15.961 [2024-05-13 03:12:06.623335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.961 qpair failed and we were unable to recover it. 00:31:15.961 [2024-05-13 03:12:06.633033] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.961 [2024-05-13 03:12:06.633275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.961 [2024-05-13 03:12:06.633301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.961 [2024-05-13 03:12:06.633316] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.961 [2024-05-13 03:12:06.633328] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:15.961 [2024-05-13 03:12:06.633359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.961 qpair failed and we were unable to recover it. 
00:31:15.961 [2024-05-13 03:12:06.643000] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.961 [2024-05-13 03:12:06.643218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.961 [2024-05-13 03:12:06.643244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.961 [2024-05-13 03:12:06.643258] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.961 [2024-05-13 03:12:06.643271] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:15.961 [2024-05-13 03:12:06.643301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.961 qpair failed and we were unable to recover it. 00:31:15.961 [2024-05-13 03:12:06.653063] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.961 [2024-05-13 03:12:06.653253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.961 [2024-05-13 03:12:06.653280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.961 [2024-05-13 03:12:06.653295] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.961 [2024-05-13 03:12:06.653307] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:15.961 [2024-05-13 03:12:06.653343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.961 qpair failed and we were unable to recover it. 00:31:15.961 [2024-05-13 03:12:06.663130] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.961 [2024-05-13 03:12:06.663328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.961 [2024-05-13 03:12:06.663355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.961 [2024-05-13 03:12:06.663370] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.961 [2024-05-13 03:12:06.663383] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:15.961 [2024-05-13 03:12:06.663412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.961 qpair failed and we were unable to recover it. 
00:31:15.961 [2024-05-13 03:12:06.673167] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.961 [2024-05-13 03:12:06.673365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.961 [2024-05-13 03:12:06.673406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.961 [2024-05-13 03:12:06.673420] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.961 [2024-05-13 03:12:06.673433] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:15.961 [2024-05-13 03:12:06.673477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.961 qpair failed and we were unable to recover it. 00:31:15.961 [2024-05-13 03:12:06.683097] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.961 [2024-05-13 03:12:06.683290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.961 [2024-05-13 03:12:06.683316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.961 [2024-05-13 03:12:06.683330] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.961 [2024-05-13 03:12:06.683343] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:15.961 [2024-05-13 03:12:06.683373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.961 qpair failed and we were unable to recover it. 00:31:15.961 [2024-05-13 03:12:06.693128] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.961 [2024-05-13 03:12:06.693325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.961 [2024-05-13 03:12:06.693367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.961 [2024-05-13 03:12:06.693383] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.961 [2024-05-13 03:12:06.693395] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:15.961 [2024-05-13 03:12:06.693439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.961 qpair failed and we were unable to recover it. 
00:31:15.961 [2024-05-13 03:12:06.703164] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.961 [2024-05-13 03:12:06.703390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.961 [2024-05-13 03:12:06.703422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.961 [2024-05-13 03:12:06.703438] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.961 [2024-05-13 03:12:06.703450] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:15.961 [2024-05-13 03:12:06.703495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.961 qpair failed and we were unable to recover it. 00:31:15.961 [2024-05-13 03:12:06.713234] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.961 [2024-05-13 03:12:06.713442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.961 [2024-05-13 03:12:06.713467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.961 [2024-05-13 03:12:06.713482] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.961 [2024-05-13 03:12:06.713495] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:15.961 [2024-05-13 03:12:06.713524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.962 qpair failed and we were unable to recover it. 00:31:15.962 [2024-05-13 03:12:06.723226] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.962 [2024-05-13 03:12:06.723410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.962 [2024-05-13 03:12:06.723436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.962 [2024-05-13 03:12:06.723451] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.962 [2024-05-13 03:12:06.723464] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:15.962 [2024-05-13 03:12:06.723494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.962 qpair failed and we were unable to recover it. 
00:31:15.962 [2024-05-13 03:12:06.733240] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.962 [2024-05-13 03:12:06.733485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.962 [2024-05-13 03:12:06.733510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.962 [2024-05-13 03:12:06.733524] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.962 [2024-05-13 03:12:06.733537] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:15.962 [2024-05-13 03:12:06.733582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.962 qpair failed and we were unable to recover it. 00:31:15.962 [2024-05-13 03:12:06.743246] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.962 [2024-05-13 03:12:06.743426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.962 [2024-05-13 03:12:06.743452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.962 [2024-05-13 03:12:06.743467] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.962 [2024-05-13 03:12:06.743480] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:15.962 [2024-05-13 03:12:06.743515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.962 qpair failed and we were unable to recover it. 00:31:15.962 [2024-05-13 03:12:06.753417] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.962 [2024-05-13 03:12:06.753650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.962 [2024-05-13 03:12:06.753690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.962 [2024-05-13 03:12:06.753717] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.962 [2024-05-13 03:12:06.753730] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:15.962 [2024-05-13 03:12:06.753774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:15.962 qpair failed and we were unable to recover it. 
00:31:16.221 [2024-05-13 03:12:06.763321] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.221 [2024-05-13 03:12:06.763514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.221 [2024-05-13 03:12:06.763540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.221 [2024-05-13 03:12:06.763555] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.221 [2024-05-13 03:12:06.763568] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.221 [2024-05-13 03:12:06.763598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.221 qpair failed and we were unable to recover it. 00:31:16.221 [2024-05-13 03:12:06.773384] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.221 [2024-05-13 03:12:06.773578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.221 [2024-05-13 03:12:06.773605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.221 [2024-05-13 03:12:06.773624] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.221 [2024-05-13 03:12:06.773653] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.221 [2024-05-13 03:12:06.773683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.221 qpair failed and we were unable to recover it. 00:31:16.221 [2024-05-13 03:12:06.783373] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.221 [2024-05-13 03:12:06.783563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.221 [2024-05-13 03:12:06.783589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.221 [2024-05-13 03:12:06.783604] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.221 [2024-05-13 03:12:06.783617] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.221 [2024-05-13 03:12:06.783647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.221 qpair failed and we were unable to recover it. 
00:31:16.221 [2024-05-13 03:12:06.793412] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.221 [2024-05-13 03:12:06.793600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.221 [2024-05-13 03:12:06.793631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.221 [2024-05-13 03:12:06.793647] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.221 [2024-05-13 03:12:06.793660] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.221 [2024-05-13 03:12:06.793689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.221 qpair failed and we were unable to recover it. 00:31:16.221 [2024-05-13 03:12:06.803462] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.221 [2024-05-13 03:12:06.803655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.221 [2024-05-13 03:12:06.803681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.221 [2024-05-13 03:12:06.803707] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.221 [2024-05-13 03:12:06.803723] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.221 [2024-05-13 03:12:06.803766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.221 qpair failed and we were unable to recover it. 00:31:16.221 [2024-05-13 03:12:06.813514] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.221 [2024-05-13 03:12:06.813798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.221 [2024-05-13 03:12:06.813826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.221 [2024-05-13 03:12:06.813845] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.221 [2024-05-13 03:12:06.813858] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.221 [2024-05-13 03:12:06.813889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.221 qpair failed and we were unable to recover it. 
00:31:16.221 [2024-05-13 03:12:06.823468] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.221 [2024-05-13 03:12:06.823652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.221 [2024-05-13 03:12:06.823679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.221 [2024-05-13 03:12:06.823694] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.221 [2024-05-13 03:12:06.823717] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.221 [2024-05-13 03:12:06.823748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.221 qpair failed and we were unable to recover it. 00:31:16.221 [2024-05-13 03:12:06.833555] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.221 [2024-05-13 03:12:06.833752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.221 [2024-05-13 03:12:06.833779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.221 [2024-05-13 03:12:06.833794] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.221 [2024-05-13 03:12:06.833813] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.221 [2024-05-13 03:12:06.833844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.221 qpair failed and we were unable to recover it. 00:31:16.221 [2024-05-13 03:12:06.843596] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.221 [2024-05-13 03:12:06.843800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.221 [2024-05-13 03:12:06.843828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.221 [2024-05-13 03:12:06.843846] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.221 [2024-05-13 03:12:06.843859] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.221 [2024-05-13 03:12:06.843891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.221 qpair failed and we were unable to recover it. 
00:31:16.221 [2024-05-13 03:12:06.853558] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.221 [2024-05-13 03:12:06.853754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.221 [2024-05-13 03:12:06.853781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.221 [2024-05-13 03:12:06.853796] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.221 [2024-05-13 03:12:06.853809] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.222 [2024-05-13 03:12:06.853838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.222 qpair failed and we were unable to recover it. 00:31:16.222 [2024-05-13 03:12:06.863633] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.222 [2024-05-13 03:12:06.863827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.222 [2024-05-13 03:12:06.863854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.222 [2024-05-13 03:12:06.863869] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.222 [2024-05-13 03:12:06.863882] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.222 [2024-05-13 03:12:06.863912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.222 qpair failed and we were unable to recover it. 00:31:16.222 [2024-05-13 03:12:06.873766] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.222 [2024-05-13 03:12:06.874016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.222 [2024-05-13 03:12:06.874042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.222 [2024-05-13 03:12:06.874056] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.222 [2024-05-13 03:12:06.874069] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.222 [2024-05-13 03:12:06.874126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.222 qpair failed and we were unable to recover it. 
00:31:16.222 [2024-05-13 03:12:06.883727] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.222 [2024-05-13 03:12:06.883928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.222 [2024-05-13 03:12:06.883953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.222 [2024-05-13 03:12:06.883967] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.222 [2024-05-13 03:12:06.883980] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.222 [2024-05-13 03:12:06.884010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.222 qpair failed and we were unable to recover it. 00:31:16.222 [2024-05-13 03:12:06.893722] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.222 [2024-05-13 03:12:06.893908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.222 [2024-05-13 03:12:06.893934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.222 [2024-05-13 03:12:06.893949] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.222 [2024-05-13 03:12:06.893962] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.222 [2024-05-13 03:12:06.893992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.222 qpair failed and we were unable to recover it. 00:31:16.222 [2024-05-13 03:12:06.903797] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.222 [2024-05-13 03:12:06.903981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.222 [2024-05-13 03:12:06.904008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.222 [2024-05-13 03:12:06.904023] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.222 [2024-05-13 03:12:06.904035] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.222 [2024-05-13 03:12:06.904065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.222 qpair failed and we were unable to recover it. 
00:31:16.222 [2024-05-13 03:12:06.913775] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.222 [2024-05-13 03:12:06.913966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.222 [2024-05-13 03:12:06.913993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.222 [2024-05-13 03:12:06.914008] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.222 [2024-05-13 03:12:06.914021] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.222 [2024-05-13 03:12:06.914063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.222 qpair failed and we were unable to recover it. 00:31:16.222 [2024-05-13 03:12:06.923774] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.222 [2024-05-13 03:12:06.923973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.222 [2024-05-13 03:12:06.924000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.222 [2024-05-13 03:12:06.924020] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.222 [2024-05-13 03:12:06.924034] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.222 [2024-05-13 03:12:06.924064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.222 qpair failed and we were unable to recover it. 00:31:16.222 [2024-05-13 03:12:06.933806] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.222 [2024-05-13 03:12:06.934050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.222 [2024-05-13 03:12:06.934077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.222 [2024-05-13 03:12:06.934093] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.222 [2024-05-13 03:12:06.934105] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.222 [2024-05-13 03:12:06.934135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.222 qpair failed and we were unable to recover it. 
00:31:16.222 [2024-05-13 03:12:06.943844] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.222 [2024-05-13 03:12:06.944035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.222 [2024-05-13 03:12:06.944077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.222 [2024-05-13 03:12:06.944092] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.222 [2024-05-13 03:12:06.944104] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.222 [2024-05-13 03:12:06.944148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.222 qpair failed and we were unable to recover it. 00:31:16.222 [2024-05-13 03:12:06.953867] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.222 [2024-05-13 03:12:06.954071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.222 [2024-05-13 03:12:06.954097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.222 [2024-05-13 03:12:06.954112] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.222 [2024-05-13 03:12:06.954125] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.222 [2024-05-13 03:12:06.954154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.222 qpair failed and we were unable to recover it. 00:31:16.222 [2024-05-13 03:12:06.963873] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.222 [2024-05-13 03:12:06.964062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.222 [2024-05-13 03:12:06.964088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.222 [2024-05-13 03:12:06.964103] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.222 [2024-05-13 03:12:06.964115] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.222 [2024-05-13 03:12:06.964145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.222 qpair failed and we were unable to recover it. 
00:31:16.222 [2024-05-13 03:12:06.973888] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.222 [2024-05-13 03:12:06.974075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.222 [2024-05-13 03:12:06.974101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.222 [2024-05-13 03:12:06.974116] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.222 [2024-05-13 03:12:06.974128] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.222 [2024-05-13 03:12:06.974158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.222 qpair failed and we were unable to recover it. 00:31:16.222 [2024-05-13 03:12:06.983997] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.222 [2024-05-13 03:12:06.984204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.222 [2024-05-13 03:12:06.984229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.222 [2024-05-13 03:12:06.984244] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.222 [2024-05-13 03:12:06.984256] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.222 [2024-05-13 03:12:06.984300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.222 qpair failed and we were unable to recover it. 00:31:16.223 [2024-05-13 03:12:06.993985] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.223 [2024-05-13 03:12:06.994256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.223 [2024-05-13 03:12:06.994282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.223 [2024-05-13 03:12:06.994297] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.223 [2024-05-13 03:12:06.994309] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.223 [2024-05-13 03:12:06.994353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.223 qpair failed and we were unable to recover it. 
00:31:16.223 [2024-05-13 03:12:07.003998] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.223 [2024-05-13 03:12:07.004195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.223 [2024-05-13 03:12:07.004221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.223 [2024-05-13 03:12:07.004235] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.223 [2024-05-13 03:12:07.004248] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.223 [2024-05-13 03:12:07.004291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.223 qpair failed and we were unable to recover it. 00:31:16.223 [2024-05-13 03:12:07.014140] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.223 [2024-05-13 03:12:07.014372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.223 [2024-05-13 03:12:07.014398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.223 [2024-05-13 03:12:07.014417] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.223 [2024-05-13 03:12:07.014431] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.223 [2024-05-13 03:12:07.014476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.223 qpair failed and we were unable to recover it. 00:31:16.481 [2024-05-13 03:12:07.024139] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.481 [2024-05-13 03:12:07.024327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.481 [2024-05-13 03:12:07.024353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.481 [2024-05-13 03:12:07.024368] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.481 [2024-05-13 03:12:07.024381] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.481 [2024-05-13 03:12:07.024423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.481 qpair failed and we were unable to recover it. 
00:31:16.481 [2024-05-13 03:12:07.034110] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.481 [2024-05-13 03:12:07.034304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.481 [2024-05-13 03:12:07.034344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.481 [2024-05-13 03:12:07.034358] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.481 [2024-05-13 03:12:07.034371] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.481 [2024-05-13 03:12:07.034415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.481 qpair failed and we were unable to recover it. 00:31:16.481 [2024-05-13 03:12:07.044187] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.481 [2024-05-13 03:12:07.044386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.481 [2024-05-13 03:12:07.044413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.481 [2024-05-13 03:12:07.044428] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.481 [2024-05-13 03:12:07.044440] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.481 [2024-05-13 03:12:07.044483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.481 qpair failed and we were unable to recover it. 00:31:16.481 [2024-05-13 03:12:07.054122] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.481 [2024-05-13 03:12:07.054317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.481 [2024-05-13 03:12:07.054344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.481 [2024-05-13 03:12:07.054358] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.481 [2024-05-13 03:12:07.054371] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.481 [2024-05-13 03:12:07.054400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.481 qpair failed and we were unable to recover it. 
00:31:16.481 [2024-05-13 03:12:07.064171] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.481 [2024-05-13 03:12:07.064367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.481 [2024-05-13 03:12:07.064393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.481 [2024-05-13 03:12:07.064408] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.481 [2024-05-13 03:12:07.064421] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.481 [2024-05-13 03:12:07.064451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.481 qpair failed and we were unable to recover it. 00:31:16.481 [2024-05-13 03:12:07.074291] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.481 [2024-05-13 03:12:07.074501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.481 [2024-05-13 03:12:07.074526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.481 [2024-05-13 03:12:07.074541] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.481 [2024-05-13 03:12:07.074554] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.481 [2024-05-13 03:12:07.074584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.481 qpair failed and we were unable to recover it. 00:31:16.481 [2024-05-13 03:12:07.084273] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.481 [2024-05-13 03:12:07.084512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.481 [2024-05-13 03:12:07.084538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.481 [2024-05-13 03:12:07.084553] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.481 [2024-05-13 03:12:07.084566] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.481 [2024-05-13 03:12:07.084596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.481 qpair failed and we were unable to recover it. 
00:31:16.481 [2024-05-13 03:12:07.094310] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.481 [2024-05-13 03:12:07.094593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.481 [2024-05-13 03:12:07.094619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.481 [2024-05-13 03:12:07.094634] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.481 [2024-05-13 03:12:07.094650] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.481 [2024-05-13 03:12:07.094705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.481 qpair failed and we were unable to recover it. 00:31:16.481 [2024-05-13 03:12:07.104384] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.481 [2024-05-13 03:12:07.104585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.481 [2024-05-13 03:12:07.104616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.481 [2024-05-13 03:12:07.104632] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.481 [2024-05-13 03:12:07.104645] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.481 [2024-05-13 03:12:07.104676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.481 qpair failed and we were unable to recover it. 00:31:16.481 [2024-05-13 03:12:07.114319] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.481 [2024-05-13 03:12:07.114527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.481 [2024-05-13 03:12:07.114567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.481 [2024-05-13 03:12:07.114582] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.481 [2024-05-13 03:12:07.114594] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.481 [2024-05-13 03:12:07.114638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.481 qpair failed and we were unable to recover it. 
00:31:16.481 [2024-05-13 03:12:07.124403] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.481 [2024-05-13 03:12:07.124597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.481 [2024-05-13 03:12:07.124624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.481 [2024-05-13 03:12:07.124639] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.481 [2024-05-13 03:12:07.124652] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.481 [2024-05-13 03:12:07.124681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.482 qpair failed and we were unable to recover it. 00:31:16.482 [2024-05-13 03:12:07.134330] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.482 [2024-05-13 03:12:07.134520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.482 [2024-05-13 03:12:07.134546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.482 [2024-05-13 03:12:07.134561] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.482 [2024-05-13 03:12:07.134573] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.482 [2024-05-13 03:12:07.134602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.482 qpair failed and we were unable to recover it. 00:31:16.482 [2024-05-13 03:12:07.144453] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.482 [2024-05-13 03:12:07.144646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.482 [2024-05-13 03:12:07.144672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.482 [2024-05-13 03:12:07.144687] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.482 [2024-05-13 03:12:07.144710] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.482 [2024-05-13 03:12:07.144747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.482 qpair failed and we were unable to recover it. 
00:31:16.482 [2024-05-13 03:12:07.154404] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.482 [2024-05-13 03:12:07.154602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.482 [2024-05-13 03:12:07.154628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.482 [2024-05-13 03:12:07.154643] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.482 [2024-05-13 03:12:07.154655] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.482 [2024-05-13 03:12:07.154685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.482 qpair failed and we were unable to recover it. 00:31:16.482 [2024-05-13 03:12:07.164441] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.482 [2024-05-13 03:12:07.164664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.482 [2024-05-13 03:12:07.164693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.482 [2024-05-13 03:12:07.164717] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.482 [2024-05-13 03:12:07.164730] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.482 [2024-05-13 03:12:07.164761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.482 qpair failed and we were unable to recover it. 00:31:16.482 [2024-05-13 03:12:07.174591] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.482 [2024-05-13 03:12:07.174796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.482 [2024-05-13 03:12:07.174823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.482 [2024-05-13 03:12:07.174837] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.482 [2024-05-13 03:12:07.174850] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.482 [2024-05-13 03:12:07.174880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.482 qpair failed and we were unable to recover it. 
00:31:16.482 [2024-05-13 03:12:07.184524] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.482 [2024-05-13 03:12:07.184746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.482 [2024-05-13 03:12:07.184773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.482 [2024-05-13 03:12:07.184788] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.482 [2024-05-13 03:12:07.184800] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.482 [2024-05-13 03:12:07.184830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.482 qpair failed and we were unable to recover it. 00:31:16.482 [2024-05-13 03:12:07.194610] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.482 [2024-05-13 03:12:07.194819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.482 [2024-05-13 03:12:07.194850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.482 [2024-05-13 03:12:07.194866] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.482 [2024-05-13 03:12:07.194878] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.482 [2024-05-13 03:12:07.194908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.482 qpair failed and we were unable to recover it. 00:31:16.482 [2024-05-13 03:12:07.204647] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.482 [2024-05-13 03:12:07.204846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.482 [2024-05-13 03:12:07.204873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.482 [2024-05-13 03:12:07.204887] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.482 [2024-05-13 03:12:07.204900] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.482 [2024-05-13 03:12:07.204929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.482 qpair failed and we were unable to recover it. 
00:31:16.482 [2024-05-13 03:12:07.214556] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.482 [2024-05-13 03:12:07.214761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.482 [2024-05-13 03:12:07.214787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.482 [2024-05-13 03:12:07.214802] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.482 [2024-05-13 03:12:07.214815] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.482 [2024-05-13 03:12:07.214844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.482 qpair failed and we were unable to recover it. 00:31:16.482 [2024-05-13 03:12:07.224591] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.482 [2024-05-13 03:12:07.224791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.482 [2024-05-13 03:12:07.224818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.482 [2024-05-13 03:12:07.224833] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.482 [2024-05-13 03:12:07.224846] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.482 [2024-05-13 03:12:07.224877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.482 qpair failed and we were unable to recover it. 00:31:16.482 [2024-05-13 03:12:07.234701] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.482 [2024-05-13 03:12:07.234907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.482 [2024-05-13 03:12:07.234933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.482 [2024-05-13 03:12:07.234948] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.482 [2024-05-13 03:12:07.234966] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.482 [2024-05-13 03:12:07.234997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.482 qpair failed and we were unable to recover it. 
00:31:16.482 [2024-05-13 03:12:07.244736] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.482 [2024-05-13 03:12:07.244976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.482 [2024-05-13 03:12:07.245003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.482 [2024-05-13 03:12:07.245018] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.482 [2024-05-13 03:12:07.245031] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.482 [2024-05-13 03:12:07.245065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.482 qpair failed and we were unable to recover it. 00:31:16.482 [2024-05-13 03:12:07.254735] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.482 [2024-05-13 03:12:07.254925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.482 [2024-05-13 03:12:07.254952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.483 [2024-05-13 03:12:07.254966] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.483 [2024-05-13 03:12:07.254979] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.483 [2024-05-13 03:12:07.255015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.483 qpair failed and we were unable to recover it. 00:31:16.483 [2024-05-13 03:12:07.264726] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.483 [2024-05-13 03:12:07.264914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.483 [2024-05-13 03:12:07.264939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.483 [2024-05-13 03:12:07.264954] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.483 [2024-05-13 03:12:07.264967] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.483 [2024-05-13 03:12:07.264997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.483 qpair failed and we were unable to recover it. 
00:31:16.483 [2024-05-13 03:12:07.274910] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.483 [2024-05-13 03:12:07.275191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.483 [2024-05-13 03:12:07.275216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.483 [2024-05-13 03:12:07.275230] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.483 [2024-05-13 03:12:07.275243] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.483 [2024-05-13 03:12:07.275299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.483 qpair failed and we were unable to recover it. 00:31:16.741 [2024-05-13 03:12:07.284759] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.741 [2024-05-13 03:12:07.284951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.741 [2024-05-13 03:12:07.284977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.741 [2024-05-13 03:12:07.284991] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.741 [2024-05-13 03:12:07.285004] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.741 [2024-05-13 03:12:07.285045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.741 qpair failed and we were unable to recover it. 00:31:16.741 [2024-05-13 03:12:07.294826] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.741 [2024-05-13 03:12:07.295078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.741 [2024-05-13 03:12:07.295104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.741 [2024-05-13 03:12:07.295118] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.741 [2024-05-13 03:12:07.295130] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.741 [2024-05-13 03:12:07.295172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.741 qpair failed and we were unable to recover it. 
00:31:16.741 [2024-05-13 03:12:07.304924] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.741 [2024-05-13 03:12:07.305111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.741 [2024-05-13 03:12:07.305137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.741 [2024-05-13 03:12:07.305151] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.741 [2024-05-13 03:12:07.305164] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.741 [2024-05-13 03:12:07.305194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.741 qpair failed and we were unable to recover it. 00:31:16.741 [2024-05-13 03:12:07.314959] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.741 [2024-05-13 03:12:07.315165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.741 [2024-05-13 03:12:07.315191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.741 [2024-05-13 03:12:07.315206] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.741 [2024-05-13 03:12:07.315219] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.741 [2024-05-13 03:12:07.315261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.741 qpair failed and we were unable to recover it. 00:31:16.741 [2024-05-13 03:12:07.324921] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.741 [2024-05-13 03:12:07.325112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.741 [2024-05-13 03:12:07.325138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.741 [2024-05-13 03:12:07.325158] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.741 [2024-05-13 03:12:07.325172] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.741 [2024-05-13 03:12:07.325203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.741 qpair failed and we were unable to recover it. 
00:31:16.741 [2024-05-13 03:12:07.334992] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.741 [2024-05-13 03:12:07.335191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.741 [2024-05-13 03:12:07.335218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.741 [2024-05-13 03:12:07.335233] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.741 [2024-05-13 03:12:07.335246] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.741 [2024-05-13 03:12:07.335280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.741 qpair failed and we were unable to recover it. 00:31:16.741 [2024-05-13 03:12:07.345031] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.741 [2024-05-13 03:12:07.345238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.741 [2024-05-13 03:12:07.345265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.741 [2024-05-13 03:12:07.345281] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.741 [2024-05-13 03:12:07.345294] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.741 [2024-05-13 03:12:07.345336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.741 qpair failed and we were unable to recover it. 00:31:16.741 [2024-05-13 03:12:07.355006] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.741 [2024-05-13 03:12:07.355235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.741 [2024-05-13 03:12:07.355276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.741 [2024-05-13 03:12:07.355290] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.741 [2024-05-13 03:12:07.355303] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.741 [2024-05-13 03:12:07.355347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.741 qpair failed and we were unable to recover it. 
00:31:16.741 [2024-05-13 03:12:07.365001] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.742 [2024-05-13 03:12:07.365192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.742 [2024-05-13 03:12:07.365219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.742 [2024-05-13 03:12:07.365233] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.742 [2024-05-13 03:12:07.365246] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.742 [2024-05-13 03:12:07.365276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.742 qpair failed and we were unable to recover it. 00:31:16.742 [2024-05-13 03:12:07.375116] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.742 [2024-05-13 03:12:07.375324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.742 [2024-05-13 03:12:07.375351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.742 [2024-05-13 03:12:07.375365] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.742 [2024-05-13 03:12:07.375378] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.742 [2024-05-13 03:12:07.375407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.742 qpair failed and we were unable to recover it. 00:31:16.742 [2024-05-13 03:12:07.385125] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.742 [2024-05-13 03:12:07.385323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.742 [2024-05-13 03:12:07.385367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.742 [2024-05-13 03:12:07.385382] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.742 [2024-05-13 03:12:07.385395] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.742 [2024-05-13 03:12:07.385438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.742 qpair failed and we were unable to recover it. 
00:31:16.742 [2024-05-13 03:12:07.395195] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.742 [2024-05-13 03:12:07.395433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.742 [2024-05-13 03:12:07.395459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.742 [2024-05-13 03:12:07.395474] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.742 [2024-05-13 03:12:07.395486] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.742 [2024-05-13 03:12:07.395543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.742 qpair failed and we were unable to recover it. 00:31:16.742 [2024-05-13 03:12:07.405110] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.742 [2024-05-13 03:12:07.405306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.742 [2024-05-13 03:12:07.405333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.742 [2024-05-13 03:12:07.405348] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.742 [2024-05-13 03:12:07.405362] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.742 [2024-05-13 03:12:07.405391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.742 qpair failed and we were unable to recover it. 00:31:16.742 [2024-05-13 03:12:07.415124] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.742 [2024-05-13 03:12:07.415317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.742 [2024-05-13 03:12:07.415344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.742 [2024-05-13 03:12:07.415364] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.742 [2024-05-13 03:12:07.415377] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.742 [2024-05-13 03:12:07.415408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.742 qpair failed and we were unable to recover it. 
00:31:16.742 [2024-05-13 03:12:07.425172] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.742 [2024-05-13 03:12:07.425368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.742 [2024-05-13 03:12:07.425394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.742 [2024-05-13 03:12:07.425423] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.742 [2024-05-13 03:12:07.425436] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.742 [2024-05-13 03:12:07.425467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.742 qpair failed and we were unable to recover it. 00:31:16.742 [2024-05-13 03:12:07.435197] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.742 [2024-05-13 03:12:07.435389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.742 [2024-05-13 03:12:07.435415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.742 [2024-05-13 03:12:07.435430] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.742 [2024-05-13 03:12:07.435442] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.742 [2024-05-13 03:12:07.435471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.742 qpair failed and we were unable to recover it. 00:31:16.742 [2024-05-13 03:12:07.445205] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.742 [2024-05-13 03:12:07.445392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.742 [2024-05-13 03:12:07.445418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.742 [2024-05-13 03:12:07.445433] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.742 [2024-05-13 03:12:07.445445] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.742 [2024-05-13 03:12:07.445475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.742 qpair failed and we were unable to recover it. 
00:31:16.742 [2024-05-13 03:12:07.455297] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.742 [2024-05-13 03:12:07.455495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.742 [2024-05-13 03:12:07.455536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.742 [2024-05-13 03:12:07.455550] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.742 [2024-05-13 03:12:07.455563] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.742 [2024-05-13 03:12:07.455607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.742 qpair failed and we were unable to recover it. 00:31:16.742 [2024-05-13 03:12:07.465262] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.742 [2024-05-13 03:12:07.465466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.742 [2024-05-13 03:12:07.465492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.742 [2024-05-13 03:12:07.465507] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.742 [2024-05-13 03:12:07.465520] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.742 [2024-05-13 03:12:07.465550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.742 qpair failed and we were unable to recover it. 00:31:16.742 [2024-05-13 03:12:07.475306] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.742 [2024-05-13 03:12:07.475509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.742 [2024-05-13 03:12:07.475536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.742 [2024-05-13 03:12:07.475551] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.742 [2024-05-13 03:12:07.475565] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.742 [2024-05-13 03:12:07.475595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.742 qpair failed and we were unable to recover it. 
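
[editor's note] Every record carries the same completion status, "sct 1, sc 130". Status code type 1 is the command-specific set, and for the Fabrics CONNECT command the value 130 (0x82) is the "connect invalid parameters" status, which lines up with the target-side complaint about the unknown controller ID. A small decoding helper is sketched below; it assumes the spec headers and the SPDK_NVMF_FABRIC_SC_INVALID_PARAM constant from spdk/nvmf_spec.h, and the completion pointer it takes is hypothetical.

/* Hedged sketch: decode the sct/sc pair printed in the log from an NVMe
 * completion. Types and constants come from spdk/nvme_spec.h and
 * spdk/nvmf_spec.h as the editor recalls them. */
#include "spdk/nvme_spec.h"
#include "spdk/nvmf_spec.h"
#include <stdio.h>

static void
print_connect_status(const struct spdk_nvme_cpl *cpl)
{
	/* For the log above: sct == 1 (command specific), sc == 130 (0x82). */
	printf("sct %u, sc %u\n",
	       (unsigned)cpl->status.sct, (unsigned)cpl->status.sc);

	if (cpl->status.sct == SPDK_NVME_SCT_COMMAND_SPECIFIC &&
	    cpl->status.sc == SPDK_NVMF_FABRIC_SC_INVALID_PARAM) {
		/* Matches the target-side "Unknown controller ID" rejection:
		 * the CONNECT carried a cntlid the target does not recognize. */
		printf("fabrics CONNECT rejected: invalid parameters\n");
	}
}
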
00:31:16.742 [2024-05-13 03:12:07.485328] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.742 [2024-05-13 03:12:07.485542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.742 [2024-05-13 03:12:07.485569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.742 [2024-05-13 03:12:07.485584] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.742 [2024-05-13 03:12:07.485597] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.742 [2024-05-13 03:12:07.485627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.742 qpair failed and we were unable to recover it. 00:31:16.742 [2024-05-13 03:12:07.495353] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.742 [2024-05-13 03:12:07.495547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.742 [2024-05-13 03:12:07.495574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.743 [2024-05-13 03:12:07.495588] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.743 [2024-05-13 03:12:07.495601] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.743 [2024-05-13 03:12:07.495631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.743 qpair failed and we were unable to recover it. 00:31:16.743 [2024-05-13 03:12:07.505413] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.743 [2024-05-13 03:12:07.505605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.743 [2024-05-13 03:12:07.505637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.743 [2024-05-13 03:12:07.505652] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.743 [2024-05-13 03:12:07.505665] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.743 [2024-05-13 03:12:07.505704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.743 qpair failed and we were unable to recover it. 
00:31:16.743 [2024-05-13 03:12:07.515409] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.743 [2024-05-13 03:12:07.515596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.743 [2024-05-13 03:12:07.515636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.743 [2024-05-13 03:12:07.515651] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.743 [2024-05-13 03:12:07.515664] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.743 [2024-05-13 03:12:07.515720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.743 qpair failed and we were unable to recover it. 00:31:16.743 [2024-05-13 03:12:07.525417] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.743 [2024-05-13 03:12:07.525612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.743 [2024-05-13 03:12:07.525638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.743 [2024-05-13 03:12:07.525652] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.743 [2024-05-13 03:12:07.525665] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.743 [2024-05-13 03:12:07.525704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.743 qpair failed and we were unable to recover it. 00:31:16.743 [2024-05-13 03:12:07.535459] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.743 [2024-05-13 03:12:07.535642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.743 [2024-05-13 03:12:07.535670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.743 [2024-05-13 03:12:07.535685] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.743 [2024-05-13 03:12:07.535712] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:16.743 [2024-05-13 03:12:07.535769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.743 qpair failed and we were unable to recover it. 
00:31:17.001 [2024-05-13 03:12:07.545504] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.001 [2024-05-13 03:12:07.545709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.001 [2024-05-13 03:12:07.545748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.001 [2024-05-13 03:12:07.545763] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.001 [2024-05-13 03:12:07.545776] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.001 [2024-05-13 03:12:07.545812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.001 qpair failed and we were unable to recover it. 00:31:17.001 [2024-05-13 03:12:07.555537] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.001 [2024-05-13 03:12:07.555768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.001 [2024-05-13 03:12:07.555794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.001 [2024-05-13 03:12:07.555808] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.001 [2024-05-13 03:12:07.555821] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.001 [2024-05-13 03:12:07.555851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.001 qpair failed and we were unable to recover it. 00:31:17.001 [2024-05-13 03:12:07.565535] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.001 [2024-05-13 03:12:07.565735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.001 [2024-05-13 03:12:07.565761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.001 [2024-05-13 03:12:07.565775] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.001 [2024-05-13 03:12:07.565788] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.001 [2024-05-13 03:12:07.565817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.001 qpair failed and we were unable to recover it. 
00:31:17.001 [2024-05-13 03:12:07.575597] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.001 [2024-05-13 03:12:07.575792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.001 [2024-05-13 03:12:07.575818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.001 [2024-05-13 03:12:07.575833] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.001 [2024-05-13 03:12:07.575845] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.001 [2024-05-13 03:12:07.575875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.001 qpair failed and we were unable to recover it. 00:31:17.001 [2024-05-13 03:12:07.585589] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.001 [2024-05-13 03:12:07.585798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.001 [2024-05-13 03:12:07.585824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.001 [2024-05-13 03:12:07.585838] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.001 [2024-05-13 03:12:07.585852] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.001 [2024-05-13 03:12:07.585881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.001 qpair failed and we were unable to recover it. 00:31:17.001 [2024-05-13 03:12:07.595741] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.001 [2024-05-13 03:12:07.595933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.001 [2024-05-13 03:12:07.595964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.001 [2024-05-13 03:12:07.595980] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.001 [2024-05-13 03:12:07.595993] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.001 [2024-05-13 03:12:07.596036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.001 qpair failed and we were unable to recover it. 
00:31:17.001 [2024-05-13 03:12:07.605665] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.001 [2024-05-13 03:12:07.605899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.001 [2024-05-13 03:12:07.605926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.001 [2024-05-13 03:12:07.605941] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.001 [2024-05-13 03:12:07.605954] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.001 [2024-05-13 03:12:07.605984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.001 qpair failed and we were unable to recover it. 00:31:17.001 [2024-05-13 03:12:07.615719] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.001 [2024-05-13 03:12:07.615910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.001 [2024-05-13 03:12:07.615936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.001 [2024-05-13 03:12:07.615951] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.001 [2024-05-13 03:12:07.615963] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.001 [2024-05-13 03:12:07.616001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.001 qpair failed and we were unable to recover it. 00:31:17.001 [2024-05-13 03:12:07.625792] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.001 [2024-05-13 03:12:07.625978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.001 [2024-05-13 03:12:07.626005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.001 [2024-05-13 03:12:07.626019] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.001 [2024-05-13 03:12:07.626031] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.001 [2024-05-13 03:12:07.626074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.001 qpair failed and we were unable to recover it. 
00:31:17.001 [2024-05-13 03:12:07.635781] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.001 [2024-05-13 03:12:07.635997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.001 [2024-05-13 03:12:07.636023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.001 [2024-05-13 03:12:07.636037] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.002 [2024-05-13 03:12:07.636054] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.002 [2024-05-13 03:12:07.636084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.002 qpair failed and we were unable to recover it. 00:31:17.002 [2024-05-13 03:12:07.645770] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.002 [2024-05-13 03:12:07.645964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.002 [2024-05-13 03:12:07.645989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.002 [2024-05-13 03:12:07.646003] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.002 [2024-05-13 03:12:07.646015] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.002 [2024-05-13 03:12:07.646044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.002 qpair failed and we were unable to recover it. 00:31:17.002 [2024-05-13 03:12:07.655809] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.002 [2024-05-13 03:12:07.655999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.002 [2024-05-13 03:12:07.656024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.002 [2024-05-13 03:12:07.656038] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.002 [2024-05-13 03:12:07.656050] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.002 [2024-05-13 03:12:07.656079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.002 qpair failed and we were unable to recover it. 
00:31:17.002 [2024-05-13 03:12:07.665824] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.002 [2024-05-13 03:12:07.666024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.002 [2024-05-13 03:12:07.666050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.002 [2024-05-13 03:12:07.666064] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.002 [2024-05-13 03:12:07.666076] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.002 [2024-05-13 03:12:07.666106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.002 qpair failed and we were unable to recover it. 00:31:17.002 [2024-05-13 03:12:07.675862] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.002 [2024-05-13 03:12:07.676051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.002 [2024-05-13 03:12:07.676077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.002 [2024-05-13 03:12:07.676091] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.002 [2024-05-13 03:12:07.676104] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.002 [2024-05-13 03:12:07.676133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.002 qpair failed and we were unable to recover it. 00:31:17.002 [2024-05-13 03:12:07.685936] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.002 [2024-05-13 03:12:07.686170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.002 [2024-05-13 03:12:07.686196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.002 [2024-05-13 03:12:07.686211] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.002 [2024-05-13 03:12:07.686226] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.002 [2024-05-13 03:12:07.686256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.002 qpair failed and we were unable to recover it. 
00:31:17.002 [2024-05-13 03:12:07.695913] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.002 [2024-05-13 03:12:07.696105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.002 [2024-05-13 03:12:07.696131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.002 [2024-05-13 03:12:07.696145] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.002 [2024-05-13 03:12:07.696157] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.002 [2024-05-13 03:12:07.696187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.002 qpair failed and we were unable to recover it. 00:31:17.002 [2024-05-13 03:12:07.705925] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.002 [2024-05-13 03:12:07.706110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.002 [2024-05-13 03:12:07.706136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.002 [2024-05-13 03:12:07.706150] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.002 [2024-05-13 03:12:07.706162] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.002 [2024-05-13 03:12:07.706191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.002 qpair failed and we were unable to recover it. 00:31:17.002 [2024-05-13 03:12:07.715972] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.002 [2024-05-13 03:12:07.716163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.002 [2024-05-13 03:12:07.716189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.002 [2024-05-13 03:12:07.716203] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.002 [2024-05-13 03:12:07.716215] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.002 [2024-05-13 03:12:07.716244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.002 qpair failed and we were unable to recover it. 
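
[editor's note] The target-side message hints at the likely root cause: an I/O queue CONNECT names an existing controller through the cntlid field of its private data, and here the host appears to still be presenting cntlid 0x1 for a controller the target has already torn down, so _nvmf_ctrlr_add_io_qpair() cannot resolve it and rejects the queue. A hedged sketch of the relevant field follows, using the connect-data layout from spdk/nvmf_spec.h; the helper function itself is hypothetical.

/* Hedged sketch: the Fabrics CONNECT private-data field the target checks
 * when adding an I/O qpair. Layout per spdk/nvmf_spec.h. */
#include "spdk/nvmf_spec.h"
#include <stdbool.h>

/* An I/O queue CONNECT names an existing controller via cntlid; an admin
 * queue CONNECT uses 0xFFFF to ask the target to allocate one dynamically. */
static bool
io_connect_targets_cntlid(const struct spdk_nvmf_fabric_connect_data *data,
			  uint16_t expected_cntlid)
{
	/* In the log, the host sends cntlid 0x1, but no live controller with
	 * that ID exists on the target, hence "Unknown controller ID 0x1". */
	return data->cntlid != 0xFFFF && data->cntlid == expected_cntlid;
}
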
00:31:17.002 [2024-05-13 03:12:07.726017] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.002 [2024-05-13 03:12:07.726203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.002 [2024-05-13 03:12:07.726228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.002 [2024-05-13 03:12:07.726242] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.002 [2024-05-13 03:12:07.726260] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.002 [2024-05-13 03:12:07.726290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.002 qpair failed and we were unable to recover it. 00:31:17.002 [2024-05-13 03:12:07.736020] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.002 [2024-05-13 03:12:07.736210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.002 [2024-05-13 03:12:07.736236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.002 [2024-05-13 03:12:07.736250] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.002 [2024-05-13 03:12:07.736262] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.002 [2024-05-13 03:12:07.736304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.002 qpair failed and we were unable to recover it. 00:31:17.002 [2024-05-13 03:12:07.746057] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.002 [2024-05-13 03:12:07.746252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.002 [2024-05-13 03:12:07.746278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.002 [2024-05-13 03:12:07.746292] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.002 [2024-05-13 03:12:07.746304] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.002 [2024-05-13 03:12:07.746333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.002 qpair failed and we were unable to recover it. 
00:31:17.002 [2024-05-13 03:12:07.756178] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.002 [2024-05-13 03:12:07.756418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.002 [2024-05-13 03:12:07.756445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.002 [2024-05-13 03:12:07.756463] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.002 [2024-05-13 03:12:07.756476] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.002 [2024-05-13 03:12:07.756507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.002 qpair failed and we were unable to recover it. 00:31:17.002 [2024-05-13 03:12:07.766198] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.002 [2024-05-13 03:12:07.766386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.002 [2024-05-13 03:12:07.766413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.002 [2024-05-13 03:12:07.766427] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.002 [2024-05-13 03:12:07.766439] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.002 [2024-05-13 03:12:07.766468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.002 qpair failed and we were unable to recover it. 00:31:17.002 [2024-05-13 03:12:07.776210] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.002 [2024-05-13 03:12:07.776389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.002 [2024-05-13 03:12:07.776415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.002 [2024-05-13 03:12:07.776429] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.002 [2024-05-13 03:12:07.776442] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.002 [2024-05-13 03:12:07.776471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.002 qpair failed and we were unable to recover it. 
00:31:17.002 [2024-05-13 03:12:07.786209] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.002 [2024-05-13 03:12:07.786394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.002 [2024-05-13 03:12:07.786420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.002 [2024-05-13 03:12:07.786434] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.002 [2024-05-13 03:12:07.786446] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.002 [2024-05-13 03:12:07.786475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.002 qpair failed and we were unable to recover it. 00:31:17.002 [2024-05-13 03:12:07.796236] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.002 [2024-05-13 03:12:07.796453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.002 [2024-05-13 03:12:07.796478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.002 [2024-05-13 03:12:07.796492] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.002 [2024-05-13 03:12:07.796505] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.002 [2024-05-13 03:12:07.796534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.002 qpair failed and we were unable to recover it. 00:31:17.262 [2024-05-13 03:12:07.806264] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.262 [2024-05-13 03:12:07.806513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.262 [2024-05-13 03:12:07.806538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.262 [2024-05-13 03:12:07.806552] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.262 [2024-05-13 03:12:07.806565] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.262 [2024-05-13 03:12:07.806594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.262 qpair failed and we were unable to recover it. 
00:31:17.262 [2024-05-13 03:12:07.816298] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.262 [2024-05-13 03:12:07.816484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.262 [2024-05-13 03:12:07.816509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.262 [2024-05-13 03:12:07.816529] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.262 [2024-05-13 03:12:07.816542] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.262 [2024-05-13 03:12:07.816571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.262 qpair failed and we were unable to recover it. 00:31:17.262 [2024-05-13 03:12:07.826258] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.262 [2024-05-13 03:12:07.826443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.262 [2024-05-13 03:12:07.826468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.262 [2024-05-13 03:12:07.826482] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.262 [2024-05-13 03:12:07.826495] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.262 [2024-05-13 03:12:07.826524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.262 qpair failed and we were unable to recover it. 00:31:17.262 [2024-05-13 03:12:07.836317] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.262 [2024-05-13 03:12:07.836543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.262 [2024-05-13 03:12:07.836568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.262 [2024-05-13 03:12:07.836583] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.262 [2024-05-13 03:12:07.836595] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.262 [2024-05-13 03:12:07.836625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.262 qpair failed and we were unable to recover it. 
00:31:17.262 [2024-05-13 03:12:07.846326] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.262 [2024-05-13 03:12:07.846515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.262 [2024-05-13 03:12:07.846541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.262 [2024-05-13 03:12:07.846556] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.262 [2024-05-13 03:12:07.846568] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.262 [2024-05-13 03:12:07.846597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.262 qpair failed and we were unable to recover it. 00:31:17.262 [2024-05-13 03:12:07.856362] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.262 [2024-05-13 03:12:07.856551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.262 [2024-05-13 03:12:07.856577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.262 [2024-05-13 03:12:07.856591] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.262 [2024-05-13 03:12:07.856603] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.262 [2024-05-13 03:12:07.856632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.262 qpair failed and we were unable to recover it. 00:31:17.262 [2024-05-13 03:12:07.866404] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.262 [2024-05-13 03:12:07.866612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.262 [2024-05-13 03:12:07.866638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.262 [2024-05-13 03:12:07.866651] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.262 [2024-05-13 03:12:07.866664] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.262 [2024-05-13 03:12:07.866693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.262 qpair failed and we were unable to recover it. 
00:31:17.262 [2024-05-13 03:12:07.876435] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.262 [2024-05-13 03:12:07.876628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.262 [2024-05-13 03:12:07.876653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.262 [2024-05-13 03:12:07.876667] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.262 [2024-05-13 03:12:07.876680] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.262 [2024-05-13 03:12:07.876717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.262 qpair failed and we were unable to recover it. 00:31:17.262 [2024-05-13 03:12:07.886531] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.262 [2024-05-13 03:12:07.886720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.262 [2024-05-13 03:12:07.886745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.263 [2024-05-13 03:12:07.886758] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.263 [2024-05-13 03:12:07.886771] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.263 [2024-05-13 03:12:07.886814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.263 qpair failed and we were unable to recover it. 00:31:17.263 [2024-05-13 03:12:07.896475] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.263 [2024-05-13 03:12:07.896663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.263 [2024-05-13 03:12:07.896688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.263 [2024-05-13 03:12:07.896712] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.263 [2024-05-13 03:12:07.896726] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.263 [2024-05-13 03:12:07.896756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.263 qpair failed and we were unable to recover it. 
00:31:17.263 [2024-05-13 03:12:07.906583] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.263 [2024-05-13 03:12:07.906777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.263 [2024-05-13 03:12:07.906808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.263 [2024-05-13 03:12:07.906824] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.263 [2024-05-13 03:12:07.906836] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.263 [2024-05-13 03:12:07.906866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.263 qpair failed and we were unable to recover it. 00:31:17.263 [2024-05-13 03:12:07.916630] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.263 [2024-05-13 03:12:07.916821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.263 [2024-05-13 03:12:07.916847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.263 [2024-05-13 03:12:07.916861] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.263 [2024-05-13 03:12:07.916873] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.263 [2024-05-13 03:12:07.916902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.263 qpair failed and we were unable to recover it. 00:31:17.263 [2024-05-13 03:12:07.926584] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.263 [2024-05-13 03:12:07.926788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.263 [2024-05-13 03:12:07.926814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.263 [2024-05-13 03:12:07.926828] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.263 [2024-05-13 03:12:07.926841] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.263 [2024-05-13 03:12:07.926870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.263 qpair failed and we were unable to recover it. 
00:31:17.263 [2024-05-13 03:12:07.936587] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.263 [2024-05-13 03:12:07.936811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.263 [2024-05-13 03:12:07.936837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.263 [2024-05-13 03:12:07.936851] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.263 [2024-05-13 03:12:07.936863] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.263 [2024-05-13 03:12:07.936893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.263 qpair failed and we were unable to recover it. 00:31:17.263 [2024-05-13 03:12:07.946722] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.263 [2024-05-13 03:12:07.946917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.263 [2024-05-13 03:12:07.946942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.263 [2024-05-13 03:12:07.946957] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.263 [2024-05-13 03:12:07.946969] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.263 [2024-05-13 03:12:07.947004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.263 qpair failed and we were unable to recover it. 00:31:17.263 [2024-05-13 03:12:07.956758] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.263 [2024-05-13 03:12:07.956956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.263 [2024-05-13 03:12:07.956982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.263 [2024-05-13 03:12:07.956996] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.263 [2024-05-13 03:12:07.957009] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.263 [2024-05-13 03:12:07.957038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.263 qpair failed and we were unable to recover it. 
00:31:17.263 [2024-05-13 03:12:07.966682] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.263 [2024-05-13 03:12:07.966901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.263 [2024-05-13 03:12:07.966927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.263 [2024-05-13 03:12:07.966941] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.263 [2024-05-13 03:12:07.966954] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.263 [2024-05-13 03:12:07.966983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.263 qpair failed and we were unable to recover it. 00:31:17.263 [2024-05-13 03:12:07.976687] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.263 [2024-05-13 03:12:07.976880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.263 [2024-05-13 03:12:07.976906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.263 [2024-05-13 03:12:07.976920] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.263 [2024-05-13 03:12:07.976932] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.263 [2024-05-13 03:12:07.976962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.263 qpair failed and we were unable to recover it. 00:31:17.263 [2024-05-13 03:12:07.986723] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.263 [2024-05-13 03:12:07.986909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.263 [2024-05-13 03:12:07.986935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.263 [2024-05-13 03:12:07.986949] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.263 [2024-05-13 03:12:07.986961] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.263 [2024-05-13 03:12:07.986991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.263 qpair failed and we were unable to recover it. 
00:31:17.263 [2024-05-13 03:12:07.996772] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.263 [2024-05-13 03:12:07.996966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.263 [2024-05-13 03:12:07.996996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.263 [2024-05-13 03:12:07.997011] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.263 [2024-05-13 03:12:07.997023] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.263 [2024-05-13 03:12:07.997052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.263 qpair failed and we were unable to recover it. 00:31:17.263 [2024-05-13 03:12:08.006812] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.263 [2024-05-13 03:12:08.007055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.263 [2024-05-13 03:12:08.007081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.263 [2024-05-13 03:12:08.007100] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.263 [2024-05-13 03:12:08.007112] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.263 [2024-05-13 03:12:08.007143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.263 qpair failed and we were unable to recover it. 00:31:17.263 [2024-05-13 03:12:08.016835] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.263 [2024-05-13 03:12:08.017030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.263 [2024-05-13 03:12:08.017056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.263 [2024-05-13 03:12:08.017070] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.263 [2024-05-13 03:12:08.017082] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.263 [2024-05-13 03:12:08.017111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.263 qpair failed and we were unable to recover it. 
00:31:17.264 [2024-05-13 03:12:08.026931] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.264 [2024-05-13 03:12:08.027126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.264 [2024-05-13 03:12:08.027151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.264 [2024-05-13 03:12:08.027166] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.264 [2024-05-13 03:12:08.027178] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.264 [2024-05-13 03:12:08.027207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.264 qpair failed and we were unable to recover it. 00:31:17.264 [2024-05-13 03:12:08.036885] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.264 [2024-05-13 03:12:08.037080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.264 [2024-05-13 03:12:08.037106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.264 [2024-05-13 03:12:08.037119] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.264 [2024-05-13 03:12:08.037131] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.264 [2024-05-13 03:12:08.037167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.264 qpair failed and we were unable to recover it. 00:31:17.264 [2024-05-13 03:12:08.046927] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.264 [2024-05-13 03:12:08.047129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.264 [2024-05-13 03:12:08.047155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.264 [2024-05-13 03:12:08.047169] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.264 [2024-05-13 03:12:08.047181] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.264 [2024-05-13 03:12:08.047210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.264 qpair failed and we were unable to recover it. 
00:31:17.264 [2024-05-13 03:12:08.056924] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.264 [2024-05-13 03:12:08.057168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.264 [2024-05-13 03:12:08.057193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.264 [2024-05-13 03:12:08.057207] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.264 [2024-05-13 03:12:08.057219] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.264 [2024-05-13 03:12:08.057248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.264 qpair failed and we were unable to recover it. 00:31:17.523 [2024-05-13 03:12:08.067068] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.523 [2024-05-13 03:12:08.067250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.523 [2024-05-13 03:12:08.067275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.523 [2024-05-13 03:12:08.067290] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.523 [2024-05-13 03:12:08.067303] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.523 [2024-05-13 03:12:08.067333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.523 qpair failed and we were unable to recover it. 00:31:17.523 [2024-05-13 03:12:08.077026] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.523 [2024-05-13 03:12:08.077281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.523 [2024-05-13 03:12:08.077306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.523 [2024-05-13 03:12:08.077320] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.523 [2024-05-13 03:12:08.077332] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.523 [2024-05-13 03:12:08.077360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.523 qpair failed and we were unable to recover it. 
00:31:17.523 [2024-05-13 03:12:08.087025] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.523 [2024-05-13 03:12:08.087219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.523 [2024-05-13 03:12:08.087246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.523 [2024-05-13 03:12:08.087260] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.523 [2024-05-13 03:12:08.087272] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.523 [2024-05-13 03:12:08.087314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.523 qpair failed and we were unable to recover it. 00:31:17.523 [2024-05-13 03:12:08.097198] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.523 [2024-05-13 03:12:08.097395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.523 [2024-05-13 03:12:08.097421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.523 [2024-05-13 03:12:08.097435] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.523 [2024-05-13 03:12:08.097448] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.523 [2024-05-13 03:12:08.097477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.523 qpair failed and we were unable to recover it. 00:31:17.523 [2024-05-13 03:12:08.107140] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.523 [2024-05-13 03:12:08.107389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.523 [2024-05-13 03:12:08.107415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.523 [2024-05-13 03:12:08.107429] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.523 [2024-05-13 03:12:08.107442] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.523 [2024-05-13 03:12:08.107471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.523 qpair failed and we were unable to recover it. 
00:31:17.523 [2024-05-13 03:12:08.117094] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.523 [2024-05-13 03:12:08.117331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.523 [2024-05-13 03:12:08.117357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.523 [2024-05-13 03:12:08.117371] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.523 [2024-05-13 03:12:08.117384] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.523 [2024-05-13 03:12:08.117413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.523 qpair failed and we were unable to recover it. 00:31:17.523 [2024-05-13 03:12:08.127120] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.523 [2024-05-13 03:12:08.127313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.523 [2024-05-13 03:12:08.127338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.523 [2024-05-13 03:12:08.127353] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.523 [2024-05-13 03:12:08.127372] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.523 [2024-05-13 03:12:08.127402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.523 qpair failed and we were unable to recover it. 00:31:17.523 [2024-05-13 03:12:08.137146] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.523 [2024-05-13 03:12:08.137376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.523 [2024-05-13 03:12:08.137401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.523 [2024-05-13 03:12:08.137415] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.523 [2024-05-13 03:12:08.137428] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.523 [2024-05-13 03:12:08.137457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.523 qpair failed and we were unable to recover it. 
00:31:17.524 [2024-05-13 03:12:08.147202] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.524 [2024-05-13 03:12:08.147384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.524 [2024-05-13 03:12:08.147409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.524 [2024-05-13 03:12:08.147423] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.524 [2024-05-13 03:12:08.147436] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.524 [2024-05-13 03:12:08.147465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.524 qpair failed and we were unable to recover it. 00:31:17.524 [2024-05-13 03:12:08.157328] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.524 [2024-05-13 03:12:08.157532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.524 [2024-05-13 03:12:08.157557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.524 [2024-05-13 03:12:08.157572] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.524 [2024-05-13 03:12:08.157584] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.524 [2024-05-13 03:12:08.157614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.524 qpair failed and we were unable to recover it. 00:31:17.524 [2024-05-13 03:12:08.167225] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.524 [2024-05-13 03:12:08.167418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.524 [2024-05-13 03:12:08.167443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.524 [2024-05-13 03:12:08.167457] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.524 [2024-05-13 03:12:08.167470] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.524 [2024-05-13 03:12:08.167499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.524 qpair failed and we were unable to recover it. 
00:31:17.524 [2024-05-13 03:12:08.177349] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.524 [2024-05-13 03:12:08.177540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.524 [2024-05-13 03:12:08.177567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.524 [2024-05-13 03:12:08.177581] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.524 [2024-05-13 03:12:08.177593] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.524 [2024-05-13 03:12:08.177622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.524 qpair failed and we were unable to recover it. 00:31:17.524 [2024-05-13 03:12:08.187307] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.524 [2024-05-13 03:12:08.187501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.524 [2024-05-13 03:12:08.187527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.524 [2024-05-13 03:12:08.187540] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.524 [2024-05-13 03:12:08.187552] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.524 [2024-05-13 03:12:08.187581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.524 qpair failed and we were unable to recover it. 00:31:17.524 [2024-05-13 03:12:08.197360] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.524 [2024-05-13 03:12:08.197615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.524 [2024-05-13 03:12:08.197641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.524 [2024-05-13 03:12:08.197655] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.524 [2024-05-13 03:12:08.197668] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.524 [2024-05-13 03:12:08.197705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.524 qpair failed and we were unable to recover it. 
00:31:17.524 [2024-05-13 03:12:08.207397] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.524 [2024-05-13 03:12:08.207622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.524 [2024-05-13 03:12:08.207648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.524 [2024-05-13 03:12:08.207662] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.524 [2024-05-13 03:12:08.207674] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.524 [2024-05-13 03:12:08.207710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.524 qpair failed and we were unable to recover it. 00:31:17.524 [2024-05-13 03:12:08.217363] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.524 [2024-05-13 03:12:08.217547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.524 [2024-05-13 03:12:08.217572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.524 [2024-05-13 03:12:08.217591] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.524 [2024-05-13 03:12:08.217604] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.524 [2024-05-13 03:12:08.217632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.524 qpair failed and we were unable to recover it. 00:31:17.524 [2024-05-13 03:12:08.227399] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.524 [2024-05-13 03:12:08.227589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.524 [2024-05-13 03:12:08.227615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.524 [2024-05-13 03:12:08.227629] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.524 [2024-05-13 03:12:08.227641] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.524 [2024-05-13 03:12:08.227670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.524 qpair failed and we were unable to recover it. 
00:31:17.524 [2024-05-13 03:12:08.237464] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.524 [2024-05-13 03:12:08.237656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.524 [2024-05-13 03:12:08.237682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.524 [2024-05-13 03:12:08.237703] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.524 [2024-05-13 03:12:08.237718] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.524 [2024-05-13 03:12:08.237748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.524 qpair failed and we were unable to recover it. 00:31:17.524 [2024-05-13 03:12:08.247452] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.524 [2024-05-13 03:12:08.247658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.524 [2024-05-13 03:12:08.247684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.524 [2024-05-13 03:12:08.247705] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.524 [2024-05-13 03:12:08.247720] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.524 [2024-05-13 03:12:08.247749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.524 qpair failed and we were unable to recover it. 00:31:17.524 [2024-05-13 03:12:08.257485] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.524 [2024-05-13 03:12:08.257748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.524 [2024-05-13 03:12:08.257773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.524 [2024-05-13 03:12:08.257787] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.524 [2024-05-13 03:12:08.257799] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.524 [2024-05-13 03:12:08.257829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.524 qpair failed and we were unable to recover it. 
00:31:17.524 [2024-05-13 03:12:08.267517] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.524 [2024-05-13 03:12:08.267714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.524 [2024-05-13 03:12:08.267743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.524 [2024-05-13 03:12:08.267759] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.524 [2024-05-13 03:12:08.267772] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.524 [2024-05-13 03:12:08.267802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.524 qpair failed and we were unable to recover it. 00:31:17.524 [2024-05-13 03:12:08.277630] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.524 [2024-05-13 03:12:08.277827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.524 [2024-05-13 03:12:08.277853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.524 [2024-05-13 03:12:08.277867] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.524 [2024-05-13 03:12:08.277879] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.525 [2024-05-13 03:12:08.277909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.525 qpair failed and we were unable to recover it. 00:31:17.525 [2024-05-13 03:12:08.287571] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.525 [2024-05-13 03:12:08.287755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.525 [2024-05-13 03:12:08.287781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.525 [2024-05-13 03:12:08.287795] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.525 [2024-05-13 03:12:08.287807] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.525 [2024-05-13 03:12:08.287836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.525 qpair failed and we were unable to recover it. 
00:31:17.525 [2024-05-13 03:12:08.297628] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.525 [2024-05-13 03:12:08.297829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.525 [2024-05-13 03:12:08.297855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.525 [2024-05-13 03:12:08.297870] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.525 [2024-05-13 03:12:08.297885] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.525 [2024-05-13 03:12:08.297915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.525 qpair failed and we were unable to recover it. 00:31:17.525 [2024-05-13 03:12:08.307649] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.525 [2024-05-13 03:12:08.307850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.525 [2024-05-13 03:12:08.307881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.525 [2024-05-13 03:12:08.307896] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.525 [2024-05-13 03:12:08.307909] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.525 [2024-05-13 03:12:08.307939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.525 qpair failed and we were unable to recover it. 00:31:17.525 [2024-05-13 03:12:08.317688] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.525 [2024-05-13 03:12:08.317884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.525 [2024-05-13 03:12:08.317910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.525 [2024-05-13 03:12:08.317924] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.525 [2024-05-13 03:12:08.317937] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.525 [2024-05-13 03:12:08.317978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.525 qpair failed and we were unable to recover it. 
00:31:17.784 [2024-05-13 03:12:08.327680] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.784 [2024-05-13 03:12:08.327921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.784 [2024-05-13 03:12:08.327947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.784 [2024-05-13 03:12:08.327960] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.784 [2024-05-13 03:12:08.327973] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.784 [2024-05-13 03:12:08.328002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.784 qpair failed and we were unable to recover it. 00:31:17.784 [2024-05-13 03:12:08.337711] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.784 [2024-05-13 03:12:08.337897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.784 [2024-05-13 03:12:08.337923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.784 [2024-05-13 03:12:08.337937] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.784 [2024-05-13 03:12:08.337950] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.784 [2024-05-13 03:12:08.337980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.784 qpair failed and we were unable to recover it. 00:31:17.784 [2024-05-13 03:12:08.347786] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.784 [2024-05-13 03:12:08.348002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.784 [2024-05-13 03:12:08.348028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.784 [2024-05-13 03:12:08.348042] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.784 [2024-05-13 03:12:08.348055] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.784 [2024-05-13 03:12:08.348089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.784 qpair failed and we were unable to recover it. 
00:31:17.784 [2024-05-13 03:12:08.357751] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.784 [2024-05-13 03:12:08.357938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.784 [2024-05-13 03:12:08.357963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.784 [2024-05-13 03:12:08.357978] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.784 [2024-05-13 03:12:08.357990] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.784 [2024-05-13 03:12:08.358019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.784 qpair failed and we were unable to recover it. 00:31:17.784 [2024-05-13 03:12:08.367883] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.784 [2024-05-13 03:12:08.368069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.784 [2024-05-13 03:12:08.368094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.785 [2024-05-13 03:12:08.368108] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.785 [2024-05-13 03:12:08.368121] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.785 [2024-05-13 03:12:08.368149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.785 qpair failed and we were unable to recover it. 00:31:17.785 [2024-05-13 03:12:08.377829] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.785 [2024-05-13 03:12:08.378017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.785 [2024-05-13 03:12:08.378042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.785 [2024-05-13 03:12:08.378056] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.785 [2024-05-13 03:12:08.378069] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.785 [2024-05-13 03:12:08.378097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.785 qpair failed and we were unable to recover it. 
00:31:17.785 [2024-05-13 03:12:08.387849] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.785 [2024-05-13 03:12:08.388040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.785 [2024-05-13 03:12:08.388065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.785 [2024-05-13 03:12:08.388079] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.785 [2024-05-13 03:12:08.388091] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.785 [2024-05-13 03:12:08.388121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.785 qpair failed and we were unable to recover it. 00:31:17.785 [2024-05-13 03:12:08.397893] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.785 [2024-05-13 03:12:08.398096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.785 [2024-05-13 03:12:08.398127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.785 [2024-05-13 03:12:08.398142] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.785 [2024-05-13 03:12:08.398155] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.785 [2024-05-13 03:12:08.398184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.785 qpair failed and we were unable to recover it. 00:31:17.785 [2024-05-13 03:12:08.407943] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.785 [2024-05-13 03:12:08.408177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.785 [2024-05-13 03:12:08.408203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.785 [2024-05-13 03:12:08.408217] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.785 [2024-05-13 03:12:08.408229] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.785 [2024-05-13 03:12:08.408258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.785 qpair failed and we were unable to recover it. 
00:31:17.785 [2024-05-13 03:12:08.417958] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.785 [2024-05-13 03:12:08.418154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.785 [2024-05-13 03:12:08.418179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.785 [2024-05-13 03:12:08.418193] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.785 [2024-05-13 03:12:08.418206] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.785 [2024-05-13 03:12:08.418235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.785 qpair failed and we were unable to recover it. 00:31:17.785 [2024-05-13 03:12:08.427977] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.785 [2024-05-13 03:12:08.428167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.785 [2024-05-13 03:12:08.428192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.785 [2024-05-13 03:12:08.428206] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.785 [2024-05-13 03:12:08.428218] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.785 [2024-05-13 03:12:08.428248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.785 qpair failed and we were unable to recover it. 00:31:17.785 [2024-05-13 03:12:08.438071] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.785 [2024-05-13 03:12:08.438282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.785 [2024-05-13 03:12:08.438308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.785 [2024-05-13 03:12:08.438322] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.785 [2024-05-13 03:12:08.438334] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.785 [2024-05-13 03:12:08.438369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.785 qpair failed and we were unable to recover it. 
00:31:17.785 [2024-05-13 03:12:08.448046] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.785 [2024-05-13 03:12:08.448248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.785 [2024-05-13 03:12:08.448273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.785 [2024-05-13 03:12:08.448287] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.785 [2024-05-13 03:12:08.448300] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.785 [2024-05-13 03:12:08.448329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.785 qpair failed and we were unable to recover it. 00:31:17.785 [2024-05-13 03:12:08.458115] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.785 [2024-05-13 03:12:08.458305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.785 [2024-05-13 03:12:08.458332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.785 [2024-05-13 03:12:08.458347] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.785 [2024-05-13 03:12:08.458360] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.785 [2024-05-13 03:12:08.458403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.785 qpair failed and we were unable to recover it. 00:31:17.785 [2024-05-13 03:12:08.468069] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.785 [2024-05-13 03:12:08.468256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.785 [2024-05-13 03:12:08.468281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.785 [2024-05-13 03:12:08.468296] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.785 [2024-05-13 03:12:08.468308] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.785 [2024-05-13 03:12:08.468338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.785 qpair failed and we were unable to recover it. 
00:31:17.785 [2024-05-13 03:12:08.478226] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.785 [2024-05-13 03:12:08.478433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.785 [2024-05-13 03:12:08.478459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.785 [2024-05-13 03:12:08.478474] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.785 [2024-05-13 03:12:08.478487] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.785 [2024-05-13 03:12:08.478517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.785 qpair failed and we were unable to recover it. 00:31:17.785 [2024-05-13 03:12:08.488155] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.785 [2024-05-13 03:12:08.488378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.785 [2024-05-13 03:12:08.488408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.785 [2024-05-13 03:12:08.488423] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.785 [2024-05-13 03:12:08.488436] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.785 [2024-05-13 03:12:08.488465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.785 qpair failed and we were unable to recover it. 00:31:17.785 [2024-05-13 03:12:08.498152] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.785 [2024-05-13 03:12:08.498341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.785 [2024-05-13 03:12:08.498366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.785 [2024-05-13 03:12:08.498381] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.785 [2024-05-13 03:12:08.498393] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.785 [2024-05-13 03:12:08.498422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.785 qpair failed and we were unable to recover it. 
00:31:17.785 [2024-05-13 03:12:08.508203] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.786 [2024-05-13 03:12:08.508390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.786 [2024-05-13 03:12:08.508416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.786 [2024-05-13 03:12:08.508430] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.786 [2024-05-13 03:12:08.508443] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.786 [2024-05-13 03:12:08.508472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.786 qpair failed and we were unable to recover it. 00:31:17.786 [2024-05-13 03:12:08.518233] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.786 [2024-05-13 03:12:08.518427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.786 [2024-05-13 03:12:08.518452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.786 [2024-05-13 03:12:08.518466] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.786 [2024-05-13 03:12:08.518478] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.786 [2024-05-13 03:12:08.518508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.786 qpair failed and we were unable to recover it. 00:31:17.786 [2024-05-13 03:12:08.528241] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.786 [2024-05-13 03:12:08.528438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.786 [2024-05-13 03:12:08.528463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.786 [2024-05-13 03:12:08.528477] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.786 [2024-05-13 03:12:08.528495] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.786 [2024-05-13 03:12:08.528524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.786 qpair failed and we were unable to recover it. 
00:31:17.786 [2024-05-13 03:12:08.538321] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.786 [2024-05-13 03:12:08.538517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.786 [2024-05-13 03:12:08.538542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.786 [2024-05-13 03:12:08.538557] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.786 [2024-05-13 03:12:08.538569] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.786 [2024-05-13 03:12:08.538598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.786 qpair failed and we were unable to recover it. 00:31:17.786 [2024-05-13 03:12:08.548310] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.786 [2024-05-13 03:12:08.548513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.786 [2024-05-13 03:12:08.548538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.786 [2024-05-13 03:12:08.548553] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.786 [2024-05-13 03:12:08.548565] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.786 [2024-05-13 03:12:08.548595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.786 qpair failed and we were unable to recover it. 00:31:17.786 [2024-05-13 03:12:08.558357] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.786 [2024-05-13 03:12:08.558562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.786 [2024-05-13 03:12:08.558588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.786 [2024-05-13 03:12:08.558602] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.786 [2024-05-13 03:12:08.558614] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.786 [2024-05-13 03:12:08.558644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.786 qpair failed and we were unable to recover it. 
00:31:17.786 [2024-05-13 03:12:08.568378] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.786 [2024-05-13 03:12:08.568570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.786 [2024-05-13 03:12:08.568597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.786 [2024-05-13 03:12:08.568614] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.786 [2024-05-13 03:12:08.568626] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.786 [2024-05-13 03:12:08.568656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.786 qpair failed and we were unable to recover it. 00:31:17.786 [2024-05-13 03:12:08.578453] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.786 [2024-05-13 03:12:08.578650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.786 [2024-05-13 03:12:08.578686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.786 [2024-05-13 03:12:08.578711] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.786 [2024-05-13 03:12:08.578725] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:17.786 [2024-05-13 03:12:08.578756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.786 qpair failed and we were unable to recover it. 00:31:18.045 [2024-05-13 03:12:08.588431] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.045 [2024-05-13 03:12:08.588657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.045 [2024-05-13 03:12:08.588683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.045 [2024-05-13 03:12:08.588715] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.045 [2024-05-13 03:12:08.588730] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:18.045 [2024-05-13 03:12:08.588760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.045 qpair failed and we were unable to recover it. 
00:31:18.045 [2024-05-13 03:12:08.598481] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.045 [2024-05-13 03:12:08.598679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.045 [2024-05-13 03:12:08.598718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.045 [2024-05-13 03:12:08.598733] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.045 [2024-05-13 03:12:08.598746] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:18.045 [2024-05-13 03:12:08.598788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.045 qpair failed and we were unable to recover it. 00:31:18.045 [2024-05-13 03:12:08.608480] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.045 [2024-05-13 03:12:08.608677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.045 [2024-05-13 03:12:08.608712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.045 [2024-05-13 03:12:08.608728] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.045 [2024-05-13 03:12:08.608740] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:18.045 [2024-05-13 03:12:08.608769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.045 qpair failed and we were unable to recover it. 00:31:18.045 [2024-05-13 03:12:08.618515] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.045 [2024-05-13 03:12:08.618753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.045 [2024-05-13 03:12:08.618781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.045 [2024-05-13 03:12:08.618801] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.045 [2024-05-13 03:12:08.618814] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:18.045 [2024-05-13 03:12:08.618843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.045 qpair failed and we were unable to recover it. 
00:31:18.045 [2024-05-13 03:12:08.628665] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.045 [2024-05-13 03:12:08.628896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.045 [2024-05-13 03:12:08.628923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.045 [2024-05-13 03:12:08.628940] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.045 [2024-05-13 03:12:08.628952] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:18.045 [2024-05-13 03:12:08.628982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.045 qpair failed and we were unable to recover it. 00:31:18.045 [2024-05-13 03:12:08.638579] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.046 [2024-05-13 03:12:08.638780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.046 [2024-05-13 03:12:08.638806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.046 [2024-05-13 03:12:08.638820] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.046 [2024-05-13 03:12:08.638833] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:18.046 [2024-05-13 03:12:08.638863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.046 qpair failed and we were unable to recover it. 00:31:18.046 [2024-05-13 03:12:08.648592] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.046 [2024-05-13 03:12:08.648798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.046 [2024-05-13 03:12:08.648824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.046 [2024-05-13 03:12:08.648838] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.046 [2024-05-13 03:12:08.648850] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:18.046 [2024-05-13 03:12:08.648880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.046 qpair failed and we were unable to recover it. 
00:31:18.046 [2024-05-13 03:12:08.658626] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.046 [2024-05-13 03:12:08.658833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.046 [2024-05-13 03:12:08.658859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.046 [2024-05-13 03:12:08.658873] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.046 [2024-05-13 03:12:08.658887] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:18.046 [2024-05-13 03:12:08.658916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.046 qpair failed and we were unable to recover it. 00:31:18.046 [2024-05-13 03:12:08.668761] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.046 [2024-05-13 03:12:08.668951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.046 [2024-05-13 03:12:08.668976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.046 [2024-05-13 03:12:08.668990] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.046 [2024-05-13 03:12:08.669003] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:18.046 [2024-05-13 03:12:08.669032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.046 qpair failed and we were unable to recover it. 00:31:18.046 [2024-05-13 03:12:08.678712] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.046 [2024-05-13 03:12:08.678903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.046 [2024-05-13 03:12:08.678928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.046 [2024-05-13 03:12:08.678943] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.046 [2024-05-13 03:12:08.678955] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:18.046 [2024-05-13 03:12:08.678985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.046 qpair failed and we were unable to recover it. 
00:31:18.046 [2024-05-13 03:12:08.688753] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.046 [2024-05-13 03:12:08.688948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.046 [2024-05-13 03:12:08.688973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.046 [2024-05-13 03:12:08.688987] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.046 [2024-05-13 03:12:08.689000] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:18.046 [2024-05-13 03:12:08.689029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.046 qpair failed and we were unable to recover it. 00:31:18.046 [2024-05-13 03:12:08.698750] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.046 [2024-05-13 03:12:08.698944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.046 [2024-05-13 03:12:08.698969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.046 [2024-05-13 03:12:08.698984] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.046 [2024-05-13 03:12:08.698996] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:18.046 [2024-05-13 03:12:08.699025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.046 qpair failed and we were unable to recover it. 00:31:18.046 [2024-05-13 03:12:08.708820] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.046 [2024-05-13 03:12:08.709008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.046 [2024-05-13 03:12:08.709033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.046 [2024-05-13 03:12:08.709052] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.046 [2024-05-13 03:12:08.709065] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:18.046 [2024-05-13 03:12:08.709094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.046 qpair failed and we were unable to recover it. 
00:31:18.046 [2024-05-13 03:12:08.718853] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.046 [2024-05-13 03:12:08.719052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.046 [2024-05-13 03:12:08.719077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.046 [2024-05-13 03:12:08.719090] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.046 [2024-05-13 03:12:08.719102] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:18.046 [2024-05-13 03:12:08.719132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.046 qpair failed and we were unable to recover it. 00:31:18.046 [2024-05-13 03:12:08.728811] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.046 [2024-05-13 03:12:08.729000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.046 [2024-05-13 03:12:08.729026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.046 [2024-05-13 03:12:08.729040] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.046 [2024-05-13 03:12:08.729052] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:18.046 [2024-05-13 03:12:08.729081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.046 qpair failed and we were unable to recover it. 00:31:18.046 [2024-05-13 03:12:08.738966] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.046 [2024-05-13 03:12:08.739158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.046 [2024-05-13 03:12:08.739184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.046 [2024-05-13 03:12:08.739198] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.046 [2024-05-13 03:12:08.739213] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:18.046 [2024-05-13 03:12:08.739243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.046 qpair failed and we were unable to recover it. 
00:31:18.046 [2024-05-13 03:12:08.748897] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.046 [2024-05-13 03:12:08.749100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.046 [2024-05-13 03:12:08.749125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.046 [2024-05-13 03:12:08.749139] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.046 [2024-05-13 03:12:08.749152] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:18.046 [2024-05-13 03:12:08.749181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.046 qpair failed and we were unable to recover it. 00:31:18.046 [2024-05-13 03:12:08.758923] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.046 [2024-05-13 03:12:08.759108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.046 [2024-05-13 03:12:08.759135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.046 [2024-05-13 03:12:08.759149] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.046 [2024-05-13 03:12:08.759160] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:18.046 [2024-05-13 03:12:08.759189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.046 qpair failed and we were unable to recover it. 00:31:18.046 [2024-05-13 03:12:08.768968] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.046 [2024-05-13 03:12:08.769169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.046 [2024-05-13 03:12:08.769198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.046 [2024-05-13 03:12:08.769213] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.046 [2024-05-13 03:12:08.769226] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:18.046 [2024-05-13 03:12:08.769256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.046 qpair failed and we were unable to recover it. 
00:31:18.047 [2024-05-13 03:12:08.778980] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.047 [2024-05-13 03:12:08.779210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.047 [2024-05-13 03:12:08.779243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.047 [2024-05-13 03:12:08.779256] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.047 [2024-05-13 03:12:08.779268] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:18.047 [2024-05-13 03:12:08.779297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.047 qpair failed and we were unable to recover it. 00:31:18.047 [2024-05-13 03:12:08.789005] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.047 [2024-05-13 03:12:08.789192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.047 [2024-05-13 03:12:08.789218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.047 [2024-05-13 03:12:08.789232] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.047 [2024-05-13 03:12:08.789244] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:18.047 [2024-05-13 03:12:08.789273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.047 qpair failed and we were unable to recover it. 00:31:18.047 [2024-05-13 03:12:08.799053] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.047 [2024-05-13 03:12:08.799279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.047 [2024-05-13 03:12:08.799310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.047 [2024-05-13 03:12:08.799325] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.047 [2024-05-13 03:12:08.799343] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:18.047 [2024-05-13 03:12:08.799372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.047 qpair failed and we were unable to recover it. 
00:31:18.047 [2024-05-13 03:12:08.809054] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.047 [2024-05-13 03:12:08.809266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.047 [2024-05-13 03:12:08.809291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.047 [2024-05-13 03:12:08.809306] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.047 [2024-05-13 03:12:08.809318] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:18.047 [2024-05-13 03:12:08.809348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.047 qpair failed and we were unable to recover it. 00:31:18.047 [2024-05-13 03:12:08.819076] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.047 [2024-05-13 03:12:08.819274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.047 [2024-05-13 03:12:08.819299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.047 [2024-05-13 03:12:08.819313] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.047 [2024-05-13 03:12:08.819326] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:18.047 [2024-05-13 03:12:08.819356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.047 qpair failed and we were unable to recover it. 00:31:18.047 [2024-05-13 03:12:08.829201] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.047 [2024-05-13 03:12:08.829395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.047 [2024-05-13 03:12:08.829421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.047 [2024-05-13 03:12:08.829435] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.047 [2024-05-13 03:12:08.829447] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:18.047 [2024-05-13 03:12:08.829477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.047 qpair failed and we were unable to recover it. 
00:31:18.047 [2024-05-13 03:12:08.839149] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.047 [2024-05-13 03:12:08.839342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.047 [2024-05-13 03:12:08.839377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.047 [2024-05-13 03:12:08.839391] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.047 [2024-05-13 03:12:08.839404] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:18.047 [2024-05-13 03:12:08.839441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.047 qpair failed and we were unable to recover it. 00:31:18.306 [2024-05-13 03:12:08.849162] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.306 [2024-05-13 03:12:08.849354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.306 [2024-05-13 03:12:08.849380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.306 [2024-05-13 03:12:08.849394] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.306 [2024-05-13 03:12:08.849406] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:18.306 [2024-05-13 03:12:08.849435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.306 qpair failed and we were unable to recover it. 00:31:18.306 [2024-05-13 03:12:08.859233] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.306 [2024-05-13 03:12:08.859464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.306 [2024-05-13 03:12:08.859489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.306 [2024-05-13 03:12:08.859503] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.306 [2024-05-13 03:12:08.859515] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:18.307 [2024-05-13 03:12:08.859545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.307 qpair failed and we were unable to recover it. 
00:31:18.307 [2024-05-13 03:12:08.869247] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.307 [2024-05-13 03:12:08.869429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.307 [2024-05-13 03:12:08.869455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.307 [2024-05-13 03:12:08.869469] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.307 [2024-05-13 03:12:08.869481] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:18.307 [2024-05-13 03:12:08.869510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.307 qpair failed and we were unable to recover it. 00:31:18.307 [2024-05-13 03:12:08.879265] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.307 [2024-05-13 03:12:08.879489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.307 [2024-05-13 03:12:08.879514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.307 [2024-05-13 03:12:08.879528] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.307 [2024-05-13 03:12:08.879541] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:18.307 [2024-05-13 03:12:08.879569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.307 qpair failed and we were unable to recover it. 00:31:18.307 [2024-05-13 03:12:08.889327] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.307 [2024-05-13 03:12:08.889544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.307 [2024-05-13 03:12:08.889573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.307 [2024-05-13 03:12:08.889588] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.307 [2024-05-13 03:12:08.889600] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:18.307 [2024-05-13 03:12:08.889629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.307 qpair failed and we were unable to recover it. 
00:31:18.307 [2024-05-13 03:12:08.899432] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.307 [2024-05-13 03:12:08.899639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.307 [2024-05-13 03:12:08.899665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.307 [2024-05-13 03:12:08.899679] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.307 [2024-05-13 03:12:08.899691] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:18.307 [2024-05-13 03:12:08.899743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.307 qpair failed and we were unable to recover it. 00:31:18.307 [2024-05-13 03:12:08.909360] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.307 [2024-05-13 03:12:08.909557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.307 [2024-05-13 03:12:08.909583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.307 [2024-05-13 03:12:08.909597] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.307 [2024-05-13 03:12:08.909609] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:18.307 [2024-05-13 03:12:08.909638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.307 qpair failed and we were unable to recover it. 00:31:18.307 [2024-05-13 03:12:08.919357] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.307 [2024-05-13 03:12:08.919547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.307 [2024-05-13 03:12:08.919572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.307 [2024-05-13 03:12:08.919586] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.307 [2024-05-13 03:12:08.919598] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:18.307 [2024-05-13 03:12:08.919627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.307 qpair failed and we were unable to recover it. 
00:31:18.307 [2024-05-13 03:12:08.929399] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.307 [2024-05-13 03:12:08.929585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.307 [2024-05-13 03:12:08.929611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.307 [2024-05-13 03:12:08.929625] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.307 [2024-05-13 03:12:08.929643] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:18.307 [2024-05-13 03:12:08.929672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.307 qpair failed and we were unable to recover it. 00:31:18.307 [2024-05-13 03:12:08.939453] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.307 [2024-05-13 03:12:08.939642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.307 [2024-05-13 03:12:08.939673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.307 [2024-05-13 03:12:08.939705] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.307 [2024-05-13 03:12:08.939722] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.307 [2024-05-13 03:12:08.939752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.307 qpair failed and we were unable to recover it. 00:31:18.307 [2024-05-13 03:12:08.949512] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.307 [2024-05-13 03:12:08.949721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.307 [2024-05-13 03:12:08.949749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.307 [2024-05-13 03:12:08.949763] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.307 [2024-05-13 03:12:08.949775] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.307 [2024-05-13 03:12:08.949804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.307 qpair failed and we were unable to recover it. 
00:31:18.307 [2024-05-13 03:12:08.959505] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.307 [2024-05-13 03:12:08.959712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.307 [2024-05-13 03:12:08.959739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.307 [2024-05-13 03:12:08.959753] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.307 [2024-05-13 03:12:08.959766] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.307 [2024-05-13 03:12:08.959794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.307 qpair failed and we were unable to recover it. 00:31:18.307 [2024-05-13 03:12:08.969568] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.307 [2024-05-13 03:12:08.969772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.307 [2024-05-13 03:12:08.969798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.307 [2024-05-13 03:12:08.969812] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.307 [2024-05-13 03:12:08.969824] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.307 [2024-05-13 03:12:08.969852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.307 qpair failed and we were unable to recover it. 00:31:18.308 [2024-05-13 03:12:08.979586] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.308 [2024-05-13 03:12:08.979817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.308 [2024-05-13 03:12:08.979843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.308 [2024-05-13 03:12:08.979857] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.308 [2024-05-13 03:12:08.979870] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.308 [2024-05-13 03:12:08.979898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.308 qpair failed and we were unable to recover it. 
00:31:18.308 [2024-05-13 03:12:08.989597] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.308 [2024-05-13 03:12:08.989787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.308 [2024-05-13 03:12:08.989814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.308 [2024-05-13 03:12:08.989828] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.308 [2024-05-13 03:12:08.989840] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.308 [2024-05-13 03:12:08.989868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.308 qpair failed and we were unable to recover it. 00:31:18.308 [2024-05-13 03:12:08.999653] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.308 [2024-05-13 03:12:08.999863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.308 [2024-05-13 03:12:08.999889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.308 [2024-05-13 03:12:08.999903] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.308 [2024-05-13 03:12:08.999915] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.308 [2024-05-13 03:12:08.999943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.308 qpair failed and we were unable to recover it. 00:31:18.308 [2024-05-13 03:12:09.009635] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.308 [2024-05-13 03:12:09.009844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.308 [2024-05-13 03:12:09.009870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.308 [2024-05-13 03:12:09.009884] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.308 [2024-05-13 03:12:09.009896] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.308 [2024-05-13 03:12:09.009925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.308 qpair failed and we were unable to recover it. 
00:31:18.308 [2024-05-13 03:12:09.019672] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.308 [2024-05-13 03:12:09.019863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.308 [2024-05-13 03:12:09.019889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.308 [2024-05-13 03:12:09.019903] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.308 [2024-05-13 03:12:09.019920] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.308 [2024-05-13 03:12:09.019950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.308 qpair failed and we were unable to recover it. 00:31:18.308 [2024-05-13 03:12:09.029702] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.308 [2024-05-13 03:12:09.029915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.308 [2024-05-13 03:12:09.029942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.308 [2024-05-13 03:12:09.029957] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.308 [2024-05-13 03:12:09.029969] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.308 [2024-05-13 03:12:09.030001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.308 qpair failed and we were unable to recover it. 00:31:18.308 [2024-05-13 03:12:09.039741] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.308 [2024-05-13 03:12:09.039935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.308 [2024-05-13 03:12:09.039961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.308 [2024-05-13 03:12:09.039975] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.308 [2024-05-13 03:12:09.039993] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.308 [2024-05-13 03:12:09.040021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.308 qpair failed and we were unable to recover it. 
00:31:18.308 [2024-05-13 03:12:09.049851] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.308 [2024-05-13 03:12:09.050041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.308 [2024-05-13 03:12:09.050066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.308 [2024-05-13 03:12:09.050080] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.308 [2024-05-13 03:12:09.050093] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.308 [2024-05-13 03:12:09.050121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.308 qpair failed and we were unable to recover it. 00:31:18.308 [2024-05-13 03:12:09.059771] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.308 [2024-05-13 03:12:09.060026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.308 [2024-05-13 03:12:09.060052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.308 [2024-05-13 03:12:09.060066] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.308 [2024-05-13 03:12:09.060078] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.308 [2024-05-13 03:12:09.060106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.308 qpair failed and we were unable to recover it. 00:31:18.308 [2024-05-13 03:12:09.069846] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.308 [2024-05-13 03:12:09.070039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.308 [2024-05-13 03:12:09.070065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.308 [2024-05-13 03:12:09.070079] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.308 [2024-05-13 03:12:09.070091] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.308 [2024-05-13 03:12:09.070119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.308 qpair failed and we were unable to recover it. 
00:31:18.308 [2024-05-13 03:12:09.079847] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.308 [2024-05-13 03:12:09.080035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.308 [2024-05-13 03:12:09.080060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.308 [2024-05-13 03:12:09.080074] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.308 [2024-05-13 03:12:09.080086] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.308 [2024-05-13 03:12:09.080114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.308 qpair failed and we were unable to recover it. 00:31:18.308 [2024-05-13 03:12:09.089840] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.308 [2024-05-13 03:12:09.090046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.308 [2024-05-13 03:12:09.090073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.308 [2024-05-13 03:12:09.090087] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.308 [2024-05-13 03:12:09.090098] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.308 [2024-05-13 03:12:09.090126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.308 qpair failed and we were unable to recover it. 00:31:18.308 [2024-05-13 03:12:09.099870] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.308 [2024-05-13 03:12:09.100068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.308 [2024-05-13 03:12:09.100094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.308 [2024-05-13 03:12:09.100108] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.308 [2024-05-13 03:12:09.100120] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.308 [2024-05-13 03:12:09.100147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.308 qpair failed and we were unable to recover it. 
00:31:18.568 [2024-05-13 03:12:09.109901] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.568 [2024-05-13 03:12:09.110092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.568 [2024-05-13 03:12:09.110120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.568 [2024-05-13 03:12:09.110140] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.568 [2024-05-13 03:12:09.110154] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.568 [2024-05-13 03:12:09.110183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.568 qpair failed and we were unable to recover it. 00:31:18.568 [2024-05-13 03:12:09.119958] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.568 [2024-05-13 03:12:09.120196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.568 [2024-05-13 03:12:09.120223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.568 [2024-05-13 03:12:09.120237] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.568 [2024-05-13 03:12:09.120250] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.568 [2024-05-13 03:12:09.120279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.568 qpair failed and we were unable to recover it. 00:31:18.568 [2024-05-13 03:12:09.129949] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.568 [2024-05-13 03:12:09.130145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.568 [2024-05-13 03:12:09.130171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.568 [2024-05-13 03:12:09.130185] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.568 [2024-05-13 03:12:09.130197] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.568 [2024-05-13 03:12:09.130225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.568 qpair failed and we were unable to recover it. 
00:31:18.568 [2024-05-13 03:12:09.139976] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.568 [2024-05-13 03:12:09.140181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.568 [2024-05-13 03:12:09.140207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.568 [2024-05-13 03:12:09.140221] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.568 [2024-05-13 03:12:09.140233] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.568 [2024-05-13 03:12:09.140261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.568 qpair failed and we were unable to recover it. 00:31:18.568 [2024-05-13 03:12:09.150045] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.568 [2024-05-13 03:12:09.150229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.568 [2024-05-13 03:12:09.150254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.568 [2024-05-13 03:12:09.150269] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.568 [2024-05-13 03:12:09.150281] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.568 [2024-05-13 03:12:09.150309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.568 qpair failed and we were unable to recover it. 00:31:18.568 [2024-05-13 03:12:09.160042] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.568 [2024-05-13 03:12:09.160243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.568 [2024-05-13 03:12:09.160268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.568 [2024-05-13 03:12:09.160283] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.568 [2024-05-13 03:12:09.160296] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.568 [2024-05-13 03:12:09.160323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.568 qpair failed and we were unable to recover it. 
00:31:18.568 [2024-05-13 03:12:09.170056] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.568 [2024-05-13 03:12:09.170243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.568 [2024-05-13 03:12:09.170268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.568 [2024-05-13 03:12:09.170282] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.568 [2024-05-13 03:12:09.170294] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.568 [2024-05-13 03:12:09.170322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.568 qpair failed and we were unable to recover it. 00:31:18.568 [2024-05-13 03:12:09.180091] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.568 [2024-05-13 03:12:09.180273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.568 [2024-05-13 03:12:09.180298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.568 [2024-05-13 03:12:09.180312] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.568 [2024-05-13 03:12:09.180324] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.568 [2024-05-13 03:12:09.180351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.568 qpair failed and we were unable to recover it. 00:31:18.568 [2024-05-13 03:12:09.190109] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.568 [2024-05-13 03:12:09.190306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.568 [2024-05-13 03:12:09.190330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.568 [2024-05-13 03:12:09.190344] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.568 [2024-05-13 03:12:09.190356] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.568 [2024-05-13 03:12:09.190384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.568 qpair failed and we were unable to recover it. 
00:31:18.568 [2024-05-13 03:12:09.200154] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.568 [2024-05-13 03:12:09.200349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.568 [2024-05-13 03:12:09.200373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.568 [2024-05-13 03:12:09.200392] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.568 [2024-05-13 03:12:09.200405] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.568 [2024-05-13 03:12:09.200433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.568 qpair failed and we were unable to recover it. 00:31:18.568 [2024-05-13 03:12:09.210199] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.568 [2024-05-13 03:12:09.210389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.568 [2024-05-13 03:12:09.210415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.568 [2024-05-13 03:12:09.210429] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.568 [2024-05-13 03:12:09.210441] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.568 [2024-05-13 03:12:09.210468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.568 qpair failed and we were unable to recover it. 00:31:18.568 [2024-05-13 03:12:09.220235] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.568 [2024-05-13 03:12:09.220466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.568 [2024-05-13 03:12:09.220491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.568 [2024-05-13 03:12:09.220505] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.568 [2024-05-13 03:12:09.220517] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.568 [2024-05-13 03:12:09.220544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.568 qpair failed and we were unable to recover it. 
00:31:18.568 [2024-05-13 03:12:09.230328] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.568 [2024-05-13 03:12:09.230520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.568 [2024-05-13 03:12:09.230546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.568 [2024-05-13 03:12:09.230560] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.568 [2024-05-13 03:12:09.230572] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.568 [2024-05-13 03:12:09.230599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.568 qpair failed and we were unable to recover it. 00:31:18.568 [2024-05-13 03:12:09.240337] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.568 [2024-05-13 03:12:09.240534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.568 [2024-05-13 03:12:09.240560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.569 [2024-05-13 03:12:09.240578] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.569 [2024-05-13 03:12:09.240591] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.569 [2024-05-13 03:12:09.240619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.569 qpair failed and we were unable to recover it. 00:31:18.569 [2024-05-13 03:12:09.250326] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.569 [2024-05-13 03:12:09.250522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.569 [2024-05-13 03:12:09.250548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.569 [2024-05-13 03:12:09.250562] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.569 [2024-05-13 03:12:09.250575] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.569 [2024-05-13 03:12:09.250602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.569 qpair failed and we were unable to recover it. 
00:31:18.569 [2024-05-13 03:12:09.260335] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.569 [2024-05-13 03:12:09.260523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.569 [2024-05-13 03:12:09.260548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.569 [2024-05-13 03:12:09.260562] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.569 [2024-05-13 03:12:09.260575] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.569 [2024-05-13 03:12:09.260602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.569 qpair failed and we were unable to recover it. 00:31:18.569 [2024-05-13 03:12:09.270444] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.569 [2024-05-13 03:12:09.270637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.569 [2024-05-13 03:12:09.270662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.569 [2024-05-13 03:12:09.270676] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.569 [2024-05-13 03:12:09.270688] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.569 [2024-05-13 03:12:09.270724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.569 qpair failed and we were unable to recover it. 00:31:18.569 [2024-05-13 03:12:09.280409] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.569 [2024-05-13 03:12:09.280605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.569 [2024-05-13 03:12:09.280632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.569 [2024-05-13 03:12:09.280646] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.569 [2024-05-13 03:12:09.280658] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.569 [2024-05-13 03:12:09.280685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.569 qpair failed and we were unable to recover it. 
00:31:18.569 [2024-05-13 03:12:09.290491] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.569 [2024-05-13 03:12:09.290683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.569 [2024-05-13 03:12:09.290717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.569 [2024-05-13 03:12:09.290738] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.569 [2024-05-13 03:12:09.290751] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.569 [2024-05-13 03:12:09.290779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.569 qpair failed and we were unable to recover it. 00:31:18.569 [2024-05-13 03:12:09.300432] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.569 [2024-05-13 03:12:09.300609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.569 [2024-05-13 03:12:09.300634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.569 [2024-05-13 03:12:09.300648] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.569 [2024-05-13 03:12:09.300660] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.569 [2024-05-13 03:12:09.300688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.569 qpair failed and we were unable to recover it. 00:31:18.569 [2024-05-13 03:12:09.310481] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.569 [2024-05-13 03:12:09.310673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.569 [2024-05-13 03:12:09.310703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.569 [2024-05-13 03:12:09.310719] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.569 [2024-05-13 03:12:09.310731] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.569 [2024-05-13 03:12:09.310759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.569 qpair failed and we were unable to recover it. 
00:31:18.569 [2024-05-13 03:12:09.320486] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.569 [2024-05-13 03:12:09.320674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.569 [2024-05-13 03:12:09.320707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.569 [2024-05-13 03:12:09.320723] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.569 [2024-05-13 03:12:09.320735] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.569 [2024-05-13 03:12:09.320763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.569 qpair failed and we were unable to recover it. 00:31:18.569 [2024-05-13 03:12:09.330500] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.569 [2024-05-13 03:12:09.330694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.569 [2024-05-13 03:12:09.330725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.569 [2024-05-13 03:12:09.330739] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.569 [2024-05-13 03:12:09.330751] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.569 [2024-05-13 03:12:09.330779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.569 qpair failed and we were unable to recover it. 00:31:18.569 [2024-05-13 03:12:09.340568] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.569 [2024-05-13 03:12:09.340764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.569 [2024-05-13 03:12:09.340798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.569 [2024-05-13 03:12:09.340812] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.569 [2024-05-13 03:12:09.340824] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.569 [2024-05-13 03:12:09.340851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.569 qpair failed and we were unable to recover it. 
00:31:18.569 [2024-05-13 03:12:09.350558] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.569 [2024-05-13 03:12:09.350750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.569 [2024-05-13 03:12:09.350775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.569 [2024-05-13 03:12:09.350789] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.569 [2024-05-13 03:12:09.350801] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.569 [2024-05-13 03:12:09.350829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.569 qpair failed and we were unable to recover it. 00:31:18.569 [2024-05-13 03:12:09.360736] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.569 [2024-05-13 03:12:09.360929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.569 [2024-05-13 03:12:09.360955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.569 [2024-05-13 03:12:09.360969] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.569 [2024-05-13 03:12:09.360981] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.569 [2024-05-13 03:12:09.361009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.569 qpair failed and we were unable to recover it. 00:31:18.829 [2024-05-13 03:12:09.370610] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.829 [2024-05-13 03:12:09.370802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.829 [2024-05-13 03:12:09.370831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.829 [2024-05-13 03:12:09.370846] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.829 [2024-05-13 03:12:09.370858] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.829 [2024-05-13 03:12:09.370887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.829 qpair failed and we were unable to recover it. 
00:31:18.829 [2024-05-13 03:12:09.380662] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.829 [2024-05-13 03:12:09.380861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.829 [2024-05-13 03:12:09.380889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.829 [2024-05-13 03:12:09.380908] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.829 [2024-05-13 03:12:09.380921] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.829 [2024-05-13 03:12:09.380950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.829 qpair failed and we were unable to recover it. 00:31:18.829 [2024-05-13 03:12:09.390702] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.829 [2024-05-13 03:12:09.390923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.829 [2024-05-13 03:12:09.390949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.829 [2024-05-13 03:12:09.390963] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.829 [2024-05-13 03:12:09.390976] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.829 [2024-05-13 03:12:09.391004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.829 qpair failed and we were unable to recover it. 00:31:18.829 [2024-05-13 03:12:09.400736] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.829 [2024-05-13 03:12:09.400946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.829 [2024-05-13 03:12:09.400972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.829 [2024-05-13 03:12:09.400986] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.829 [2024-05-13 03:12:09.401001] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.829 [2024-05-13 03:12:09.401029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.829 qpair failed and we were unable to recover it. 
00:31:18.829 [2024-05-13 03:12:09.410817] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.829 [2024-05-13 03:12:09.411008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.829 [2024-05-13 03:12:09.411034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.829 [2024-05-13 03:12:09.411048] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.829 [2024-05-13 03:12:09.411060] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.829 [2024-05-13 03:12:09.411087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.829 qpair failed and we were unable to recover it. 00:31:18.829 [2024-05-13 03:12:09.420797] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.829 [2024-05-13 03:12:09.420986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.829 [2024-05-13 03:12:09.421012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.829 [2024-05-13 03:12:09.421027] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.829 [2024-05-13 03:12:09.421039] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.829 [2024-05-13 03:12:09.421067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.829 qpair failed and we were unable to recover it. 00:31:18.829 [2024-05-13 03:12:09.430789] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.829 [2024-05-13 03:12:09.430971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.829 [2024-05-13 03:12:09.430996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.829 [2024-05-13 03:12:09.431010] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.829 [2024-05-13 03:12:09.431021] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.829 [2024-05-13 03:12:09.431049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.829 qpair failed and we were unable to recover it. 
00:31:18.829 [2024-05-13 03:12:09.440929] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.829 [2024-05-13 03:12:09.441125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.829 [2024-05-13 03:12:09.441150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.829 [2024-05-13 03:12:09.441164] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.829 [2024-05-13 03:12:09.441176] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.829 [2024-05-13 03:12:09.441203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.829 qpair failed and we were unable to recover it. 00:31:18.829 [2024-05-13 03:12:09.450891] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.829 [2024-05-13 03:12:09.451113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.829 [2024-05-13 03:12:09.451138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.829 [2024-05-13 03:12:09.451152] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.829 [2024-05-13 03:12:09.451164] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.829 [2024-05-13 03:12:09.451191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.829 qpair failed and we were unable to recover it. 00:31:18.829 [2024-05-13 03:12:09.460874] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.829 [2024-05-13 03:12:09.461062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.829 [2024-05-13 03:12:09.461087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.829 [2024-05-13 03:12:09.461101] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.829 [2024-05-13 03:12:09.461113] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.829 [2024-05-13 03:12:09.461140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.829 qpair failed and we were unable to recover it. 
00:31:18.829 [2024-05-13 03:12:09.470892] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.829 [2024-05-13 03:12:09.471074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.830 [2024-05-13 03:12:09.471104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.830 [2024-05-13 03:12:09.471119] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.830 [2024-05-13 03:12:09.471131] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.830 [2024-05-13 03:12:09.471159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.830 qpair failed and we were unable to recover it. 00:31:18.830 [2024-05-13 03:12:09.480986] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.830 [2024-05-13 03:12:09.481214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.830 [2024-05-13 03:12:09.481240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.830 [2024-05-13 03:12:09.481254] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.830 [2024-05-13 03:12:09.481266] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.830 [2024-05-13 03:12:09.481294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.830 qpair failed and we were unable to recover it. 00:31:18.830 [2024-05-13 03:12:09.490973] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.830 [2024-05-13 03:12:09.491160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.830 [2024-05-13 03:12:09.491186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.830 [2024-05-13 03:12:09.491200] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.830 [2024-05-13 03:12:09.491213] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.830 [2024-05-13 03:12:09.491241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.830 qpair failed and we were unable to recover it. 
00:31:18.830 [2024-05-13 03:12:09.501019] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.830 [2024-05-13 03:12:09.501259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.830 [2024-05-13 03:12:09.501284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.830 [2024-05-13 03:12:09.501298] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.830 [2024-05-13 03:12:09.501310] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.830 [2024-05-13 03:12:09.501338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.830 qpair failed and we were unable to recover it. 00:31:18.830 [2024-05-13 03:12:09.511050] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.830 [2024-05-13 03:12:09.511274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.830 [2024-05-13 03:12:09.511300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.830 [2024-05-13 03:12:09.511314] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.830 [2024-05-13 03:12:09.511326] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.830 [2024-05-13 03:12:09.511353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.830 qpair failed and we were unable to recover it. 00:31:18.830 [2024-05-13 03:12:09.521106] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.830 [2024-05-13 03:12:09.521296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.830 [2024-05-13 03:12:09.521322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.830 [2024-05-13 03:12:09.521336] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.830 [2024-05-13 03:12:09.521348] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.830 [2024-05-13 03:12:09.521376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.830 qpair failed and we were unable to recover it. 
00:31:18.830 [2024-05-13 03:12:09.531120] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.830 [2024-05-13 03:12:09.531331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.830 [2024-05-13 03:12:09.531356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.830 [2024-05-13 03:12:09.531370] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.830 [2024-05-13 03:12:09.531382] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.830 [2024-05-13 03:12:09.531410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.830 qpair failed and we were unable to recover it. 00:31:18.830 [2024-05-13 03:12:09.541132] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.830 [2024-05-13 03:12:09.541317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.830 [2024-05-13 03:12:09.541342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.830 [2024-05-13 03:12:09.541356] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.830 [2024-05-13 03:12:09.541369] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.830 [2024-05-13 03:12:09.541396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.830 qpair failed and we were unable to recover it. 00:31:18.830 [2024-05-13 03:12:09.551132] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.830 [2024-05-13 03:12:09.551320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.830 [2024-05-13 03:12:09.551345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.830 [2024-05-13 03:12:09.551359] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.830 [2024-05-13 03:12:09.551371] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.830 [2024-05-13 03:12:09.551398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.830 qpair failed and we were unable to recover it. 
00:31:18.830 [2024-05-13 03:12:09.561204] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.830 [2024-05-13 03:12:09.561415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.830 [2024-05-13 03:12:09.561445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.830 [2024-05-13 03:12:09.561460] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.830 [2024-05-13 03:12:09.561472] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.830 [2024-05-13 03:12:09.561499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.830 qpair failed and we were unable to recover it. 00:31:18.830 [2024-05-13 03:12:09.571263] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.830 [2024-05-13 03:12:09.571462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.830 [2024-05-13 03:12:09.571487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.830 [2024-05-13 03:12:09.571501] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.830 [2024-05-13 03:12:09.571513] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.830 [2024-05-13 03:12:09.571541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.830 qpair failed and we were unable to recover it. 00:31:18.830 [2024-05-13 03:12:09.581290] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.830 [2024-05-13 03:12:09.581476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.830 [2024-05-13 03:12:09.581501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.830 [2024-05-13 03:12:09.581515] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.830 [2024-05-13 03:12:09.581527] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.830 [2024-05-13 03:12:09.581554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.830 qpair failed and we were unable to recover it. 
00:31:18.830 [2024-05-13 03:12:09.591269] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.830 [2024-05-13 03:12:09.591450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.830 [2024-05-13 03:12:09.591476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.830 [2024-05-13 03:12:09.591491] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.830 [2024-05-13 03:12:09.591504] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.830 [2024-05-13 03:12:09.591531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.830 qpair failed and we were unable to recover it. 00:31:18.830 [2024-05-13 03:12:09.601360] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.830 [2024-05-13 03:12:09.601590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.830 [2024-05-13 03:12:09.601615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.830 [2024-05-13 03:12:09.601629] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.830 [2024-05-13 03:12:09.601641] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.830 [2024-05-13 03:12:09.601675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.831 qpair failed and we were unable to recover it. 00:31:18.831 [2024-05-13 03:12:09.611350] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.831 [2024-05-13 03:12:09.611540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.831 [2024-05-13 03:12:09.611565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.831 [2024-05-13 03:12:09.611579] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.831 [2024-05-13 03:12:09.611591] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.831 [2024-05-13 03:12:09.611619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.831 qpair failed and we were unable to recover it. 
00:31:18.831 [2024-05-13 03:12:09.621352] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.831 [2024-05-13 03:12:09.621533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.831 [2024-05-13 03:12:09.621559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.831 [2024-05-13 03:12:09.621573] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.831 [2024-05-13 03:12:09.621585] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:18.831 [2024-05-13 03:12:09.621613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.831 qpair failed and we were unable to recover it. 00:31:19.090 [2024-05-13 03:12:09.631452] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.090 [2024-05-13 03:12:09.631676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.090 [2024-05-13 03:12:09.631712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.090 [2024-05-13 03:12:09.631731] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.090 [2024-05-13 03:12:09.631744] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:19.090 [2024-05-13 03:12:09.631772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.090 qpair failed and we were unable to recover it. 00:31:19.090 [2024-05-13 03:12:09.641415] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.090 [2024-05-13 03:12:09.641606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.090 [2024-05-13 03:12:09.641633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.090 [2024-05-13 03:12:09.641648] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.090 [2024-05-13 03:12:09.641660] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:19.090 [2024-05-13 03:12:09.641688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.090 qpair failed and we were unable to recover it. 
00:31:19.090 [2024-05-13 03:12:09.651477] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.090 [2024-05-13 03:12:09.651681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.090 [2024-05-13 03:12:09.651723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.090 [2024-05-13 03:12:09.651749] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.090 [2024-05-13 03:12:09.651761] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:19.090 [2024-05-13 03:12:09.651790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.090 qpair failed and we were unable to recover it. 00:31:19.090 [2024-05-13 03:12:09.661485] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.090 [2024-05-13 03:12:09.661691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.090 [2024-05-13 03:12:09.661723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.090 [2024-05-13 03:12:09.661737] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.090 [2024-05-13 03:12:09.661749] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:19.090 [2024-05-13 03:12:09.661778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.090 qpair failed and we were unable to recover it. 00:31:19.090 [2024-05-13 03:12:09.671518] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.090 [2024-05-13 03:12:09.671715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.090 [2024-05-13 03:12:09.671741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.090 [2024-05-13 03:12:09.671755] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.090 [2024-05-13 03:12:09.671768] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:19.090 [2024-05-13 03:12:09.671796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.090 qpair failed and we were unable to recover it. 
00:31:19.090 [2024-05-13 03:12:09.681554] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.090 [2024-05-13 03:12:09.681760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.090 [2024-05-13 03:12:09.681785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.090 [2024-05-13 03:12:09.681799] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.090 [2024-05-13 03:12:09.681811] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:19.090 [2024-05-13 03:12:09.681839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.090 qpair failed and we were unable to recover it. 00:31:19.090 [2024-05-13 03:12:09.691557] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.090 [2024-05-13 03:12:09.691759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.090 [2024-05-13 03:12:09.691787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.090 [2024-05-13 03:12:09.691804] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.090 [2024-05-13 03:12:09.691817] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:19.090 [2024-05-13 03:12:09.691851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.090 qpair failed and we were unable to recover it. 00:31:19.090 [2024-05-13 03:12:09.701588] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.090 [2024-05-13 03:12:09.701787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.090 [2024-05-13 03:12:09.701814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.090 [2024-05-13 03:12:09.701829] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.090 [2024-05-13 03:12:09.701841] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:19.090 [2024-05-13 03:12:09.701869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.090 qpair failed and we were unable to recover it. 
00:31:19.090 [2024-05-13 03:12:09.711630] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.090 [2024-05-13 03:12:09.711825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.090 [2024-05-13 03:12:09.711851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.090 [2024-05-13 03:12:09.711865] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.090 [2024-05-13 03:12:09.711895] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:19.090 [2024-05-13 03:12:09.711926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.091 qpair failed and we were unable to recover it. 00:31:19.091 [2024-05-13 03:12:09.721650] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.091 [2024-05-13 03:12:09.721856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.091 [2024-05-13 03:12:09.721882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.091 [2024-05-13 03:12:09.721896] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.091 [2024-05-13 03:12:09.721908] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:19.091 [2024-05-13 03:12:09.721936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.091 qpair failed and we were unable to recover it. 00:31:19.091 [2024-05-13 03:12:09.731690] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.091 [2024-05-13 03:12:09.731907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.091 [2024-05-13 03:12:09.731936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.091 [2024-05-13 03:12:09.731950] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.091 [2024-05-13 03:12:09.731963] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:19.091 [2024-05-13 03:12:09.731992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.091 qpair failed and we were unable to recover it. 
00:31:19.091 [2024-05-13 03:12:09.741737] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.091 [2024-05-13 03:12:09.741930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.091 [2024-05-13 03:12:09.741961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.091 [2024-05-13 03:12:09.741976] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.091 [2024-05-13 03:12:09.741989] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:19.091 [2024-05-13 03:12:09.742017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.091 qpair failed and we were unable to recover it. 00:31:19.091 [2024-05-13 03:12:09.751773] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.091 [2024-05-13 03:12:09.751966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.091 [2024-05-13 03:12:09.751994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.091 [2024-05-13 03:12:09.752009] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.091 [2024-05-13 03:12:09.752022] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:19.091 [2024-05-13 03:12:09.752050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.091 qpair failed and we were unable to recover it. 00:31:19.091 [2024-05-13 03:12:09.761853] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.091 [2024-05-13 03:12:09.762051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.091 [2024-05-13 03:12:09.762076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.091 [2024-05-13 03:12:09.762090] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.091 [2024-05-13 03:12:09.762102] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:19.091 [2024-05-13 03:12:09.762130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.091 qpair failed and we were unable to recover it. 
00:31:19.091 [2024-05-13 03:12:09.771871] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.091 [2024-05-13 03:12:09.772062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.091 [2024-05-13 03:12:09.772087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.091 [2024-05-13 03:12:09.772101] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.091 [2024-05-13 03:12:09.772113] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:19.091 [2024-05-13 03:12:09.772141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.091 qpair failed and we were unable to recover it. 00:31:19.091 [2024-05-13 03:12:09.781854] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.091 [2024-05-13 03:12:09.782049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.091 [2024-05-13 03:12:09.782075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.091 [2024-05-13 03:12:09.782088] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.091 [2024-05-13 03:12:09.782106] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:19.091 [2024-05-13 03:12:09.782134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.091 qpair failed and we were unable to recover it. 00:31:19.091 [2024-05-13 03:12:09.791903] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.091 [2024-05-13 03:12:09.792094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.091 [2024-05-13 03:12:09.792119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.091 [2024-05-13 03:12:09.792134] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.091 [2024-05-13 03:12:09.792146] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:19.091 [2024-05-13 03:12:09.792173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.091 qpair failed and we were unable to recover it. 
00:31:19.091 [2024-05-13 03:12:09.801894] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.091 [2024-05-13 03:12:09.802127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.091 [2024-05-13 03:12:09.802152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.091 [2024-05-13 03:12:09.802165] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.091 [2024-05-13 03:12:09.802177] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:19.091 [2024-05-13 03:12:09.802205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.091 qpair failed and we were unable to recover it. 00:31:19.091 [2024-05-13 03:12:09.811920] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.091 [2024-05-13 03:12:09.812106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.091 [2024-05-13 03:12:09.812131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.091 [2024-05-13 03:12:09.812145] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.091 [2024-05-13 03:12:09.812157] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:19.091 [2024-05-13 03:12:09.812184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.091 qpair failed and we were unable to recover it. 00:31:19.091 [2024-05-13 03:12:09.821993] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.091 [2024-05-13 03:12:09.822211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.091 [2024-05-13 03:12:09.822236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.091 [2024-05-13 03:12:09.822250] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.091 [2024-05-13 03:12:09.822262] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:19.091 [2024-05-13 03:12:09.822290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.091 qpair failed and we were unable to recover it. 
00:31:19.091 [2024-05-13 03:12:09.831950] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.091 [2024-05-13 03:12:09.832126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.091 [2024-05-13 03:12:09.832156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.091 [2024-05-13 03:12:09.832171] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.091 [2024-05-13 03:12:09.832183] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:19.091 [2024-05-13 03:12:09.832209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.091 qpair failed and we were unable to recover it. 00:31:19.091 [2024-05-13 03:12:09.841977] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.091 [2024-05-13 03:12:09.842166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.091 [2024-05-13 03:12:09.842191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.091 [2024-05-13 03:12:09.842205] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.091 [2024-05-13 03:12:09.842217] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:19.091 [2024-05-13 03:12:09.842244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.091 qpair failed and we were unable to recover it. 00:31:19.091 [2024-05-13 03:12:09.852029] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.091 [2024-05-13 03:12:09.852217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.091 [2024-05-13 03:12:09.852243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.091 [2024-05-13 03:12:09.852257] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.091 [2024-05-13 03:12:09.852269] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:19.092 [2024-05-13 03:12:09.852296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.092 qpair failed and we were unable to recover it. 
00:31:19.092 [2024-05-13 03:12:09.862054] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.092 [2024-05-13 03:12:09.862251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.092 [2024-05-13 03:12:09.862279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.092 [2024-05-13 03:12:09.862293] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.092 [2024-05-13 03:12:09.862306] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:19.092 [2024-05-13 03:12:09.862342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.092 qpair failed and we were unable to recover it. 00:31:19.092 [2024-05-13 03:12:09.872151] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.092 [2024-05-13 03:12:09.872331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.092 [2024-05-13 03:12:09.872357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.092 [2024-05-13 03:12:09.872371] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.092 [2024-05-13 03:12:09.872388] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:19.092 [2024-05-13 03:12:09.872416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.092 qpair failed and we were unable to recover it. 00:31:19.092 [2024-05-13 03:12:09.882084] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.092 [2024-05-13 03:12:09.882292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.092 [2024-05-13 03:12:09.882318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.092 [2024-05-13 03:12:09.882332] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.092 [2024-05-13 03:12:09.882344] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:19.092 [2024-05-13 03:12:09.882372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.092 qpair failed and we were unable to recover it. 
00:31:19.352 [2024-05-13 03:12:09.892140] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.352 [2024-05-13 03:12:09.892328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.352 [2024-05-13 03:12:09.892355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.352 [2024-05-13 03:12:09.892369] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.352 [2024-05-13 03:12:09.892382] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:19.352 [2024-05-13 03:12:09.892410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.352 qpair failed and we were unable to recover it. 00:31:19.352 [2024-05-13 03:12:09.902146] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.352 [2024-05-13 03:12:09.902338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.352 [2024-05-13 03:12:09.902365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.352 [2024-05-13 03:12:09.902380] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.352 [2024-05-13 03:12:09.902392] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:19.352 [2024-05-13 03:12:09.902421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.352 qpair failed and we were unable to recover it. 00:31:19.352 [2024-05-13 03:12:09.912210] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.352 [2024-05-13 03:12:09.912431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.352 [2024-05-13 03:12:09.912457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.352 [2024-05-13 03:12:09.912471] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.352 [2024-05-13 03:12:09.912483] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:19.352 [2024-05-13 03:12:09.912511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.352 qpair failed and we were unable to recover it. 
00:31:19.352 [2024-05-13 03:12:09.922265] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.352 [2024-05-13 03:12:09.922475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.352 [2024-05-13 03:12:09.922500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.352 [2024-05-13 03:12:09.922514] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.352 [2024-05-13 03:12:09.922526] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:19.352 [2024-05-13 03:12:09.922554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.352 qpair failed and we were unable to recover it. 00:31:19.352 [2024-05-13 03:12:09.932324] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.352 [2024-05-13 03:12:09.932510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.352 [2024-05-13 03:12:09.932535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.352 [2024-05-13 03:12:09.932550] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.352 [2024-05-13 03:12:09.932562] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:19.352 [2024-05-13 03:12:09.932589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.352 qpair failed and we were unable to recover it. 00:31:19.352 [2024-05-13 03:12:09.942292] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.352 [2024-05-13 03:12:09.942476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.352 [2024-05-13 03:12:09.942502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.352 [2024-05-13 03:12:09.942516] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.352 [2024-05-13 03:12:09.942529] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:19.352 [2024-05-13 03:12:09.942557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.352 qpair failed and we were unable to recover it. 
00:31:19.352 [2024-05-13 03:12:09.952296] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.352 [2024-05-13 03:12:09.952514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.352 [2024-05-13 03:12:09.952539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.352 [2024-05-13 03:12:09.952553] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.352 [2024-05-13 03:12:09.952565] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:19.352 [2024-05-13 03:12:09.952592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.352 qpair failed and we were unable to recover it. 00:31:19.352 [2024-05-13 03:12:09.962434] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.352 [2024-05-13 03:12:09.962631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.352 [2024-05-13 03:12:09.962657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.352 [2024-05-13 03:12:09.962671] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.352 [2024-05-13 03:12:09.962689] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:19.352 [2024-05-13 03:12:09.962724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.352 qpair failed and we were unable to recover it. 00:31:19.352 [2024-05-13 03:12:09.972337] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.352 [2024-05-13 03:12:09.972525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.352 [2024-05-13 03:12:09.972550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.352 [2024-05-13 03:12:09.972564] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.352 [2024-05-13 03:12:09.972576] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:19.352 [2024-05-13 03:12:09.972604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.352 qpair failed and we were unable to recover it. 
00:31:19.352 [2024-05-13 03:12:09.982463] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.352 [2024-05-13 03:12:09.982648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.352 [2024-05-13 03:12:09.982673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.352 [2024-05-13 03:12:09.982687] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.352 [2024-05-13 03:12:09.982707] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:19.352 [2024-05-13 03:12:09.982736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.352 qpair failed and we were unable to recover it. 00:31:19.352 [2024-05-13 03:12:09.992389] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.352 [2024-05-13 03:12:09.992583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.352 [2024-05-13 03:12:09.992608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.352 [2024-05-13 03:12:09.992622] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.352 [2024-05-13 03:12:09.992635] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:19.352 [2024-05-13 03:12:09.992662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.352 qpair failed and we were unable to recover it. 00:31:19.353 [2024-05-13 03:12:10.002528] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.353 [2024-05-13 03:12:10.002728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.353 [2024-05-13 03:12:10.002754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.353 [2024-05-13 03:12:10.002767] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.353 [2024-05-13 03:12:10.002780] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:19.353 [2024-05-13 03:12:10.002808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.353 qpair failed and we were unable to recover it. 
00:31:19.353 [2024-05-13 03:12:10.012468] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.353 [2024-05-13 03:12:10.012661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.353 [2024-05-13 03:12:10.012686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.353 [2024-05-13 03:12:10.012708] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.353 [2024-05-13 03:12:10.012721] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:19.353 [2024-05-13 03:12:10.012749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.353 qpair failed and we were unable to recover it. 00:31:19.353 [2024-05-13 03:12:10.022505] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.353 [2024-05-13 03:12:10.022733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.353 [2024-05-13 03:12:10.022758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.353 [2024-05-13 03:12:10.022773] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.353 [2024-05-13 03:12:10.022785] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:19.353 [2024-05-13 03:12:10.022813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.353 qpair failed and we were unable to recover it. 00:31:19.353 [2024-05-13 03:12:10.032557] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.353 [2024-05-13 03:12:10.032752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.353 [2024-05-13 03:12:10.032777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.353 [2024-05-13 03:12:10.032791] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.353 [2024-05-13 03:12:10.032803] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:19.353 [2024-05-13 03:12:10.032831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.353 qpair failed and we were unable to recover it. 
00:31:19.353 [2024-05-13 03:12:10.042538] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.353 [2024-05-13 03:12:10.042730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.353 [2024-05-13 03:12:10.042755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.353 [2024-05-13 03:12:10.042769] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.353 [2024-05-13 03:12:10.042781] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:19.353 [2024-05-13 03:12:10.042809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.353 qpair failed and we were unable to recover it. 00:31:19.353 [2024-05-13 03:12:10.052645] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.353 [2024-05-13 03:12:10.052884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.353 [2024-05-13 03:12:10.052912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.353 [2024-05-13 03:12:10.052927] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.353 [2024-05-13 03:12:10.052947] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:19.353 [2024-05-13 03:12:10.052978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.353 qpair failed and we were unable to recover it. 00:31:19.353 [2024-05-13 03:12:10.062682] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.353 [2024-05-13 03:12:10.062889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.353 [2024-05-13 03:12:10.062915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.353 [2024-05-13 03:12:10.062929] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.353 [2024-05-13 03:12:10.062942] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:19.353 [2024-05-13 03:12:10.062970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.353 qpair failed and we were unable to recover it. 
00:31:19.353 [2024-05-13 03:12:10.072631] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.353 [2024-05-13 03:12:10.072825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.353 [2024-05-13 03:12:10.072851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.353 [2024-05-13 03:12:10.072866] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.353 [2024-05-13 03:12:10.072878] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:19.353 [2024-05-13 03:12:10.072907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.353 qpair failed and we were unable to recover it. 00:31:19.353 [2024-05-13 03:12:10.082648] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.353 [2024-05-13 03:12:10.082887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.353 [2024-05-13 03:12:10.082913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.353 [2024-05-13 03:12:10.082928] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.353 [2024-05-13 03:12:10.082940] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:19.353 [2024-05-13 03:12:10.082968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.353 qpair failed and we were unable to recover it. 00:31:19.353 [2024-05-13 03:12:10.092687] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.353 [2024-05-13 03:12:10.092883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.353 [2024-05-13 03:12:10.092913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.353 [2024-05-13 03:12:10.092928] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.353 [2024-05-13 03:12:10.092941] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:19.353 [2024-05-13 03:12:10.092969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.353 qpair failed and we were unable to recover it. 
00:31:19.353 [2024-05-13 03:12:10.102708] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.353 [2024-05-13 03:12:10.102904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.353 [2024-05-13 03:12:10.102937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.353 [2024-05-13 03:12:10.102953] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.353 [2024-05-13 03:12:10.102966] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.353 [2024-05-13 03:12:10.102998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.353 qpair failed and we were unable to recover it. 00:31:19.353 [2024-05-13 03:12:10.112774] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.353 [2024-05-13 03:12:10.112998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.353 [2024-05-13 03:12:10.113025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.353 [2024-05-13 03:12:10.113039] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.353 [2024-05-13 03:12:10.113052] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.353 [2024-05-13 03:12:10.113082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.353 qpair failed and we were unable to recover it. 00:31:19.353 [2024-05-13 03:12:10.122771] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.353 [2024-05-13 03:12:10.122970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.353 [2024-05-13 03:12:10.122997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.353 [2024-05-13 03:12:10.123011] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.353 [2024-05-13 03:12:10.123024] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.354 [2024-05-13 03:12:10.123054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.354 qpair failed and we were unable to recover it. 
00:31:19.354 [2024-05-13 03:12:10.132789] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.354 [2024-05-13 03:12:10.132984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.354 [2024-05-13 03:12:10.133011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.354 [2024-05-13 03:12:10.133025] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.354 [2024-05-13 03:12:10.133038] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.354 [2024-05-13 03:12:10.133081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.354 qpair failed and we were unable to recover it. 00:31:19.354 [2024-05-13 03:12:10.142823] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.354 [2024-05-13 03:12:10.143056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.354 [2024-05-13 03:12:10.143083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.354 [2024-05-13 03:12:10.143103] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.354 [2024-05-13 03:12:10.143116] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.354 [2024-05-13 03:12:10.143147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.354 qpair failed and we were unable to recover it. 00:31:19.614 [2024-05-13 03:12:10.152871] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.614 [2024-05-13 03:12:10.153059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.614 [2024-05-13 03:12:10.153086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.614 [2024-05-13 03:12:10.153100] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.614 [2024-05-13 03:12:10.153112] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.614 [2024-05-13 03:12:10.153142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.614 qpair failed and we were unable to recover it. 
00:31:19.614 [2024-05-13 03:12:10.162893] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.614 [2024-05-13 03:12:10.163086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.614 [2024-05-13 03:12:10.163114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.614 [2024-05-13 03:12:10.163128] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.614 [2024-05-13 03:12:10.163140] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.614 [2024-05-13 03:12:10.163169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.614 qpair failed and we were unable to recover it. 00:31:19.614 [2024-05-13 03:12:10.172903] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.614 [2024-05-13 03:12:10.173094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.614 [2024-05-13 03:12:10.173121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.614 [2024-05-13 03:12:10.173135] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.614 [2024-05-13 03:12:10.173147] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.614 [2024-05-13 03:12:10.173176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.614 qpair failed and we were unable to recover it. 00:31:19.614 [2024-05-13 03:12:10.183038] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.614 [2024-05-13 03:12:10.183222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.614 [2024-05-13 03:12:10.183248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.614 [2024-05-13 03:12:10.183263] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.614 [2024-05-13 03:12:10.183275] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.614 [2024-05-13 03:12:10.183304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.614 qpair failed and we were unable to recover it. 
00:31:19.614 [2024-05-13 03:12:10.193041] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.614 [2024-05-13 03:12:10.193232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.614 [2024-05-13 03:12:10.193260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.614 [2024-05-13 03:12:10.193276] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.614 [2024-05-13 03:12:10.193288] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.614 [2024-05-13 03:12:10.193318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.614 qpair failed and we were unable to recover it. 00:31:19.614 [2024-05-13 03:12:10.203028] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.614 [2024-05-13 03:12:10.203223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.614 [2024-05-13 03:12:10.203249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.614 [2024-05-13 03:12:10.203264] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.614 [2024-05-13 03:12:10.203277] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.614 [2024-05-13 03:12:10.203306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.614 qpair failed and we were unable to recover it. 00:31:19.614 [2024-05-13 03:12:10.213032] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.614 [2024-05-13 03:12:10.213245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.614 [2024-05-13 03:12:10.213272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.614 [2024-05-13 03:12:10.213286] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.614 [2024-05-13 03:12:10.213298] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.614 [2024-05-13 03:12:10.213328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.614 qpair failed and we were unable to recover it. 
00:31:19.614 [2024-05-13 03:12:10.223106] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.614 [2024-05-13 03:12:10.223349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.614 [2024-05-13 03:12:10.223375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.614 [2024-05-13 03:12:10.223390] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.614 [2024-05-13 03:12:10.223405] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.615 [2024-05-13 03:12:10.223435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.615 qpair failed and we were unable to recover it. 00:31:19.615 [2024-05-13 03:12:10.233044] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.615 [2024-05-13 03:12:10.233235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.615 [2024-05-13 03:12:10.233266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.615 [2024-05-13 03:12:10.233282] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.615 [2024-05-13 03:12:10.233294] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.615 [2024-05-13 03:12:10.233324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.615 qpair failed and we were unable to recover it. 00:31:19.615 [2024-05-13 03:12:10.243189] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.615 [2024-05-13 03:12:10.243392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.615 [2024-05-13 03:12:10.243418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.615 [2024-05-13 03:12:10.243433] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.615 [2024-05-13 03:12:10.243445] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.615 [2024-05-13 03:12:10.243474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.615 qpair failed and we were unable to recover it. 
00:31:19.615 [2024-05-13 03:12:10.253194] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.615 [2024-05-13 03:12:10.253386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.615 [2024-05-13 03:12:10.253413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.615 [2024-05-13 03:12:10.253427] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.615 [2024-05-13 03:12:10.253439] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.615 [2024-05-13 03:12:10.253469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.615 qpair failed and we were unable to recover it. 00:31:19.615 [2024-05-13 03:12:10.263170] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.615 [2024-05-13 03:12:10.263400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.615 [2024-05-13 03:12:10.263426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.615 [2024-05-13 03:12:10.263441] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.615 [2024-05-13 03:12:10.263453] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.615 [2024-05-13 03:12:10.263482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.615 qpair failed and we were unable to recover it. 00:31:19.615 [2024-05-13 03:12:10.273221] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.615 [2024-05-13 03:12:10.273427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.615 [2024-05-13 03:12:10.273453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.615 [2024-05-13 03:12:10.273470] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.615 [2024-05-13 03:12:10.273482] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.615 [2024-05-13 03:12:10.273520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.615 qpair failed and we were unable to recover it. 
00:31:19.615 [2024-05-13 03:12:10.283229] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.615 [2024-05-13 03:12:10.283419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.615 [2024-05-13 03:12:10.283447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.615 [2024-05-13 03:12:10.283462] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.615 [2024-05-13 03:12:10.283475] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.615 [2024-05-13 03:12:10.283504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.615 qpair failed and we were unable to recover it. 00:31:19.615 [2024-05-13 03:12:10.293241] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.615 [2024-05-13 03:12:10.293433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.615 [2024-05-13 03:12:10.293459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.615 [2024-05-13 03:12:10.293473] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.615 [2024-05-13 03:12:10.293486] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.615 [2024-05-13 03:12:10.293516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.615 qpair failed and we were unable to recover it. 00:31:19.615 [2024-05-13 03:12:10.303252] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.615 [2024-05-13 03:12:10.303442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.615 [2024-05-13 03:12:10.303469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.615 [2024-05-13 03:12:10.303484] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.615 [2024-05-13 03:12:10.303497] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.615 [2024-05-13 03:12:10.303526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.615 qpair failed and we were unable to recover it. 
00:31:19.615 [2024-05-13 03:12:10.313301] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.615 [2024-05-13 03:12:10.313486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.615 [2024-05-13 03:12:10.313512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.615 [2024-05-13 03:12:10.313527] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.615 [2024-05-13 03:12:10.313539] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.615 [2024-05-13 03:12:10.313568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.615 qpair failed and we were unable to recover it. 00:31:19.615 [2024-05-13 03:12:10.323454] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.615 [2024-05-13 03:12:10.323672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.615 [2024-05-13 03:12:10.323712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.615 [2024-05-13 03:12:10.323729] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.615 [2024-05-13 03:12:10.323741] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.615 [2024-05-13 03:12:10.323771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.615 qpair failed and we were unable to recover it. 00:31:19.615 [2024-05-13 03:12:10.333353] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.615 [2024-05-13 03:12:10.333588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.615 [2024-05-13 03:12:10.333614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.615 [2024-05-13 03:12:10.333629] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.615 [2024-05-13 03:12:10.333641] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.615 [2024-05-13 03:12:10.333671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.615 qpair failed and we were unable to recover it. 
00:31:19.615 [2024-05-13 03:12:10.343368] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.615 [2024-05-13 03:12:10.343556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.615 [2024-05-13 03:12:10.343582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.615 [2024-05-13 03:12:10.343596] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.615 [2024-05-13 03:12:10.343609] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.615 [2024-05-13 03:12:10.343638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.615 qpair failed and we were unable to recover it. 00:31:19.615 [2024-05-13 03:12:10.353420] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.615 [2024-05-13 03:12:10.353613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.615 [2024-05-13 03:12:10.353640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.615 [2024-05-13 03:12:10.353654] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.615 [2024-05-13 03:12:10.353667] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.615 [2024-05-13 03:12:10.353704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.615 qpair failed and we were unable to recover it. 00:31:19.615 [2024-05-13 03:12:10.363540] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.615 [2024-05-13 03:12:10.363745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.615 [2024-05-13 03:12:10.363772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.615 [2024-05-13 03:12:10.363786] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.615 [2024-05-13 03:12:10.363798] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.616 [2024-05-13 03:12:10.363834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.616 qpair failed and we were unable to recover it. 
00:31:19.616 [2024-05-13 03:12:10.373453] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.616 [2024-05-13 03:12:10.373637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.616 [2024-05-13 03:12:10.373664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.616 [2024-05-13 03:12:10.373678] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.616 [2024-05-13 03:12:10.373690] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.616 [2024-05-13 03:12:10.373728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.616 qpair failed and we were unable to recover it. 00:31:19.616 [2024-05-13 03:12:10.383508] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.616 [2024-05-13 03:12:10.383694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.616 [2024-05-13 03:12:10.383728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.616 [2024-05-13 03:12:10.383742] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.616 [2024-05-13 03:12:10.383755] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.616 [2024-05-13 03:12:10.383784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.616 qpair failed and we were unable to recover it. 00:31:19.616 [2024-05-13 03:12:10.393508] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.616 [2024-05-13 03:12:10.393712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.616 [2024-05-13 03:12:10.393739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.616 [2024-05-13 03:12:10.393753] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.616 [2024-05-13 03:12:10.393766] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.616 [2024-05-13 03:12:10.393795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.616 qpair failed and we were unable to recover it. 
00:31:19.616 [2024-05-13 03:12:10.403592] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.616 [2024-05-13 03:12:10.403792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.616 [2024-05-13 03:12:10.403819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.616 [2024-05-13 03:12:10.403833] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.616 [2024-05-13 03:12:10.403846] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.616 [2024-05-13 03:12:10.403876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.616 qpair failed and we were unable to recover it. 00:31:19.616 [2024-05-13 03:12:10.413684] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.616 [2024-05-13 03:12:10.413933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.616 [2024-05-13 03:12:10.413960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.616 [2024-05-13 03:12:10.413974] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.616 [2024-05-13 03:12:10.413986] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.616 [2024-05-13 03:12:10.414016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.616 qpair failed and we were unable to recover it. 00:31:19.875 [2024-05-13 03:12:10.423591] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.875 [2024-05-13 03:12:10.423785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.875 [2024-05-13 03:12:10.423812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.875 [2024-05-13 03:12:10.423827] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.875 [2024-05-13 03:12:10.423839] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.875 [2024-05-13 03:12:10.423869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.875 qpair failed and we were unable to recover it. 
00:31:19.875 [2024-05-13 03:12:10.433663] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.875 [2024-05-13 03:12:10.433862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.875 [2024-05-13 03:12:10.433888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.875 [2024-05-13 03:12:10.433903] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.875 [2024-05-13 03:12:10.433916] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.875 [2024-05-13 03:12:10.433945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.875 qpair failed and we were unable to recover it. 00:31:19.875 [2024-05-13 03:12:10.443675] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.875 [2024-05-13 03:12:10.443882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.875 [2024-05-13 03:12:10.443908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.875 [2024-05-13 03:12:10.443923] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.875 [2024-05-13 03:12:10.443935] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.875 [2024-05-13 03:12:10.443965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.875 qpair failed and we were unable to recover it. 00:31:19.875 [2024-05-13 03:12:10.453704] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.875 [2024-05-13 03:12:10.453890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.875 [2024-05-13 03:12:10.453916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.875 [2024-05-13 03:12:10.453930] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.875 [2024-05-13 03:12:10.453948] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.875 [2024-05-13 03:12:10.453978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.875 qpair failed and we were unable to recover it. 
00:31:19.875 [2024-05-13 03:12:10.463754] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.875 [2024-05-13 03:12:10.463939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.875 [2024-05-13 03:12:10.463966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.875 [2024-05-13 03:12:10.463980] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.875 [2024-05-13 03:12:10.463992] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.875 [2024-05-13 03:12:10.464034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.875 qpair failed and we were unable to recover it. 00:31:19.876 [2024-05-13 03:12:10.473768] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.876 [2024-05-13 03:12:10.473965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.876 [2024-05-13 03:12:10.473991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.876 [2024-05-13 03:12:10.474005] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.876 [2024-05-13 03:12:10.474017] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.876 [2024-05-13 03:12:10.474047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.876 qpair failed and we were unable to recover it. 00:31:19.876 [2024-05-13 03:12:10.483796] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.876 [2024-05-13 03:12:10.483990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.876 [2024-05-13 03:12:10.484016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.876 [2024-05-13 03:12:10.484030] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.876 [2024-05-13 03:12:10.484042] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.876 [2024-05-13 03:12:10.484072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.876 qpair failed and we were unable to recover it. 
00:31:19.876 [2024-05-13 03:12:10.493806] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.876 [2024-05-13 03:12:10.493999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.876 [2024-05-13 03:12:10.494027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.876 [2024-05-13 03:12:10.494041] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.876 [2024-05-13 03:12:10.494054] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.876 [2024-05-13 03:12:10.494083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.876 qpair failed and we were unable to recover it. 00:31:19.876 [2024-05-13 03:12:10.503885] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.876 [2024-05-13 03:12:10.504092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.876 [2024-05-13 03:12:10.504119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.876 [2024-05-13 03:12:10.504134] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.876 [2024-05-13 03:12:10.504146] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.876 [2024-05-13 03:12:10.504175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.876 qpair failed and we were unable to recover it. 00:31:19.876 [2024-05-13 03:12:10.513880] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.876 [2024-05-13 03:12:10.514068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.876 [2024-05-13 03:12:10.514095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.876 [2024-05-13 03:12:10.514108] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.876 [2024-05-13 03:12:10.514121] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.876 [2024-05-13 03:12:10.514150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.876 qpair failed and we were unable to recover it. 
00:31:19.876 [2024-05-13 03:12:10.523951] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.876 [2024-05-13 03:12:10.524173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.876 [2024-05-13 03:12:10.524199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.876 [2024-05-13 03:12:10.524214] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.876 [2024-05-13 03:12:10.524226] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.876 [2024-05-13 03:12:10.524256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.876 qpair failed and we were unable to recover it. 00:31:19.876 [2024-05-13 03:12:10.533901] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.876 [2024-05-13 03:12:10.534089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.876 [2024-05-13 03:12:10.534116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.876 [2024-05-13 03:12:10.534131] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.876 [2024-05-13 03:12:10.534143] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.876 [2024-05-13 03:12:10.534172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.876 qpair failed and we were unable to recover it. 00:31:19.876 [2024-05-13 03:12:10.543974] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.876 [2024-05-13 03:12:10.544167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.876 [2024-05-13 03:12:10.544193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.876 [2024-05-13 03:12:10.544213] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.876 [2024-05-13 03:12:10.544227] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.876 [2024-05-13 03:12:10.544270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.876 qpair failed and we were unable to recover it. 
00:31:19.876 [2024-05-13 03:12:10.553995] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.876 [2024-05-13 03:12:10.554232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.876 [2024-05-13 03:12:10.554258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.876 [2024-05-13 03:12:10.554272] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.876 [2024-05-13 03:12:10.554285] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.876 [2024-05-13 03:12:10.554315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.876 qpair failed and we were unable to recover it. 00:31:19.876 [2024-05-13 03:12:10.564021] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.876 [2024-05-13 03:12:10.564217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.876 [2024-05-13 03:12:10.564243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.876 [2024-05-13 03:12:10.564257] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.876 [2024-05-13 03:12:10.564270] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.876 [2024-05-13 03:12:10.564299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.876 qpair failed and we were unable to recover it. 00:31:19.876 [2024-05-13 03:12:10.574025] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.876 [2024-05-13 03:12:10.574227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.876 [2024-05-13 03:12:10.574253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.876 [2024-05-13 03:12:10.574268] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.876 [2024-05-13 03:12:10.574280] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.876 [2024-05-13 03:12:10.574309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.876 qpair failed and we were unable to recover it. 
00:31:19.876 [2024-05-13 03:12:10.584093] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.876 [2024-05-13 03:12:10.584277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.876 [2024-05-13 03:12:10.584303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.876 [2024-05-13 03:12:10.584317] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.876 [2024-05-13 03:12:10.584329] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.876 [2024-05-13 03:12:10.584359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.876 qpair failed and we were unable to recover it. 00:31:19.876 [2024-05-13 03:12:10.594157] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.876 [2024-05-13 03:12:10.594345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.876 [2024-05-13 03:12:10.594372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.876 [2024-05-13 03:12:10.594386] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.876 [2024-05-13 03:12:10.594398] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.876 [2024-05-13 03:12:10.594428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.876 qpair failed and we were unable to recover it. 00:31:19.876 [2024-05-13 03:12:10.604162] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.876 [2024-05-13 03:12:10.604398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.876 [2024-05-13 03:12:10.604424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.876 [2024-05-13 03:12:10.604439] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.877 [2024-05-13 03:12:10.604451] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.877 [2024-05-13 03:12:10.604481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.877 qpair failed and we were unable to recover it. 
00:31:19.877 [2024-05-13 03:12:10.614276] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.877 [2024-05-13 03:12:10.614458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.877 [2024-05-13 03:12:10.614485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.877 [2024-05-13 03:12:10.614499] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.877 [2024-05-13 03:12:10.614511] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.877 [2024-05-13 03:12:10.614541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.877 qpair failed and we were unable to recover it. 00:31:19.877 [2024-05-13 03:12:10.624205] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.877 [2024-05-13 03:12:10.624399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.877 [2024-05-13 03:12:10.624425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.877 [2024-05-13 03:12:10.624439] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.877 [2024-05-13 03:12:10.624452] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.877 [2024-05-13 03:12:10.624494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.877 qpair failed and we were unable to recover it. 00:31:19.877 [2024-05-13 03:12:10.634282] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.877 [2024-05-13 03:12:10.634502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.877 [2024-05-13 03:12:10.634529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.877 [2024-05-13 03:12:10.634549] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.877 [2024-05-13 03:12:10.634561] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.877 [2024-05-13 03:12:10.634590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.877 qpair failed and we were unable to recover it. 
00:31:19.877 [2024-05-13 03:12:10.644243] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.877 [2024-05-13 03:12:10.644428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.877 [2024-05-13 03:12:10.644455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.877 [2024-05-13 03:12:10.644469] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.877 [2024-05-13 03:12:10.644482] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.877 [2024-05-13 03:12:10.644511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.877 qpair failed and we were unable to recover it. 00:31:19.877 [2024-05-13 03:12:10.654304] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.877 [2024-05-13 03:12:10.654495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.877 [2024-05-13 03:12:10.654521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.877 [2024-05-13 03:12:10.654535] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.877 [2024-05-13 03:12:10.654547] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.877 [2024-05-13 03:12:10.654577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.877 qpair failed and we were unable to recover it. 00:31:19.877 [2024-05-13 03:12:10.664292] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.877 [2024-05-13 03:12:10.664478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.877 [2024-05-13 03:12:10.664504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.877 [2024-05-13 03:12:10.664519] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.877 [2024-05-13 03:12:10.664531] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.877 [2024-05-13 03:12:10.664560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.877 qpair failed and we were unable to recover it. 
00:31:19.877 [2024-05-13 03:12:10.674385] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.877 [2024-05-13 03:12:10.674569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.877 [2024-05-13 03:12:10.674595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.877 [2024-05-13 03:12:10.674609] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.877 [2024-05-13 03:12:10.674621] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:19.877 [2024-05-13 03:12:10.674664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.877 qpair failed and we were unable to recover it. 00:31:20.136 [2024-05-13 03:12:10.684368] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.136 [2024-05-13 03:12:10.684560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.136 [2024-05-13 03:12:10.684586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.136 [2024-05-13 03:12:10.684600] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.136 [2024-05-13 03:12:10.684613] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.136 [2024-05-13 03:12:10.684642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.136 qpair failed and we were unable to recover it. 00:31:20.136 [2024-05-13 03:12:10.694394] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.136 [2024-05-13 03:12:10.694592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.136 [2024-05-13 03:12:10.694618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.136 [2024-05-13 03:12:10.694633] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.136 [2024-05-13 03:12:10.694645] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.136 [2024-05-13 03:12:10.694675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.136 qpair failed and we were unable to recover it. 
00:31:20.136 [2024-05-13 03:12:10.704415] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.136 [2024-05-13 03:12:10.704636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.136 [2024-05-13 03:12:10.704662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.136 [2024-05-13 03:12:10.704676] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.136 [2024-05-13 03:12:10.704688] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.136 [2024-05-13 03:12:10.704727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.136 qpair failed and we were unable to recover it. 00:31:20.136 [2024-05-13 03:12:10.714437] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.136 [2024-05-13 03:12:10.714632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.136 [2024-05-13 03:12:10.714658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.136 [2024-05-13 03:12:10.714672] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.136 [2024-05-13 03:12:10.714685] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.136 [2024-05-13 03:12:10.714727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.136 qpair failed and we were unable to recover it. 00:31:20.136 [2024-05-13 03:12:10.724488] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.136 [2024-05-13 03:12:10.724688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.136 [2024-05-13 03:12:10.724727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.136 [2024-05-13 03:12:10.724743] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.136 [2024-05-13 03:12:10.724755] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.136 [2024-05-13 03:12:10.724785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.136 qpair failed and we were unable to recover it. 
00:31:20.136 [2024-05-13 03:12:10.734533] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.136 [2024-05-13 03:12:10.734734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.136 [2024-05-13 03:12:10.734760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.136 [2024-05-13 03:12:10.734774] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.137 [2024-05-13 03:12:10.734787] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.137 [2024-05-13 03:12:10.734816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.137 qpair failed and we were unable to recover it. 00:31:20.137 [2024-05-13 03:12:10.744514] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.137 [2024-05-13 03:12:10.744714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.137 [2024-05-13 03:12:10.744740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.137 [2024-05-13 03:12:10.744754] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.137 [2024-05-13 03:12:10.744767] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.137 [2024-05-13 03:12:10.744796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.137 qpair failed and we were unable to recover it. 00:31:20.137 [2024-05-13 03:12:10.754628] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.137 [2024-05-13 03:12:10.754826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.137 [2024-05-13 03:12:10.754852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.137 [2024-05-13 03:12:10.754867] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.137 [2024-05-13 03:12:10.754879] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.137 [2024-05-13 03:12:10.754908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.137 qpair failed and we were unable to recover it. 
00:31:20.137 [2024-05-13 03:12:10.764622] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.137 [2024-05-13 03:12:10.764836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.137 [2024-05-13 03:12:10.764871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.137 [2024-05-13 03:12:10.764886] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.137 [2024-05-13 03:12:10.764898] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.137 [2024-05-13 03:12:10.764935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.137 qpair failed and we were unable to recover it. 00:31:20.137 [2024-05-13 03:12:10.774624] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.137 [2024-05-13 03:12:10.774827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.137 [2024-05-13 03:12:10.774854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.137 [2024-05-13 03:12:10.774869] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.137 [2024-05-13 03:12:10.774881] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.137 [2024-05-13 03:12:10.774910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.137 qpair failed and we were unable to recover it. 00:31:20.137 [2024-05-13 03:12:10.784725] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.137 [2024-05-13 03:12:10.784961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.137 [2024-05-13 03:12:10.784987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.137 [2024-05-13 03:12:10.785002] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.137 [2024-05-13 03:12:10.785014] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.137 [2024-05-13 03:12:10.785044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.137 qpair failed and we were unable to recover it. 
00:31:20.137 [2024-05-13 03:12:10.794656] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.137 [2024-05-13 03:12:10.794853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.137 [2024-05-13 03:12:10.794879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.137 [2024-05-13 03:12:10.794894] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.137 [2024-05-13 03:12:10.794906] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.137 [2024-05-13 03:12:10.794936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.137 qpair failed and we were unable to recover it. 00:31:20.137 [2024-05-13 03:12:10.804704] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.137 [2024-05-13 03:12:10.804959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.137 [2024-05-13 03:12:10.804986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.137 [2024-05-13 03:12:10.805001] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.137 [2024-05-13 03:12:10.805017] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.137 [2024-05-13 03:12:10.805048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.137 qpair failed and we were unable to recover it. 00:31:20.137 [2024-05-13 03:12:10.814745] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.137 [2024-05-13 03:12:10.814933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.137 [2024-05-13 03:12:10.814965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.137 [2024-05-13 03:12:10.814980] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.137 [2024-05-13 03:12:10.814993] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.137 [2024-05-13 03:12:10.815037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.137 qpair failed and we were unable to recover it. 
00:31:20.137 [2024-05-13 03:12:10.824807] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.137 [2024-05-13 03:12:10.825031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.137 [2024-05-13 03:12:10.825059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.137 [2024-05-13 03:12:10.825077] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.137 [2024-05-13 03:12:10.825089] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.137 [2024-05-13 03:12:10.825133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.137 qpair failed and we were unable to recover it. 00:31:20.137 [2024-05-13 03:12:10.834766] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.137 [2024-05-13 03:12:10.834983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.137 [2024-05-13 03:12:10.835010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.137 [2024-05-13 03:12:10.835024] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.137 [2024-05-13 03:12:10.835036] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.137 [2024-05-13 03:12:10.835066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.137 qpair failed and we were unable to recover it. 00:31:20.137 [2024-05-13 03:12:10.844865] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.137 [2024-05-13 03:12:10.845071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.137 [2024-05-13 03:12:10.845098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.137 [2024-05-13 03:12:10.845115] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.137 [2024-05-13 03:12:10.845127] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.137 [2024-05-13 03:12:10.845157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.137 qpair failed and we were unable to recover it. 
00:31:20.137 [2024-05-13 03:12:10.854867] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.137 [2024-05-13 03:12:10.855059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.137 [2024-05-13 03:12:10.855085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.137 [2024-05-13 03:12:10.855100] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.137 [2024-05-13 03:12:10.855117] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.137 [2024-05-13 03:12:10.855148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.137 qpair failed and we were unable to recover it. 00:31:20.137 [2024-05-13 03:12:10.864853] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.137 [2024-05-13 03:12:10.865040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.137 [2024-05-13 03:12:10.865066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.137 [2024-05-13 03:12:10.865080] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.137 [2024-05-13 03:12:10.865092] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.137 [2024-05-13 03:12:10.865122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.137 qpair failed and we were unable to recover it. 00:31:20.137 [2024-05-13 03:12:10.874905] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.137 [2024-05-13 03:12:10.875139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.138 [2024-05-13 03:12:10.875166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.138 [2024-05-13 03:12:10.875181] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.138 [2024-05-13 03:12:10.875197] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.138 [2024-05-13 03:12:10.875238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.138 qpair failed and we were unable to recover it. 
00:31:20.138 [2024-05-13 03:12:10.884936] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.138 [2024-05-13 03:12:10.885135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.138 [2024-05-13 03:12:10.885161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.138 [2024-05-13 03:12:10.885176] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.138 [2024-05-13 03:12:10.885188] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.138 [2024-05-13 03:12:10.885218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.138 qpair failed and we were unable to recover it. 00:31:20.138 [2024-05-13 03:12:10.894943] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.138 [2024-05-13 03:12:10.895180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.138 [2024-05-13 03:12:10.895205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.138 [2024-05-13 03:12:10.895219] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.138 [2024-05-13 03:12:10.895232] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.138 [2024-05-13 03:12:10.895262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.138 qpair failed and we were unable to recover it. 00:31:20.138 [2024-05-13 03:12:10.905070] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.138 [2024-05-13 03:12:10.905275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.138 [2024-05-13 03:12:10.905302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.138 [2024-05-13 03:12:10.905317] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.138 [2024-05-13 03:12:10.905329] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.138 [2024-05-13 03:12:10.905359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.138 qpair failed and we were unable to recover it. 
00:31:20.138 [2024-05-13 03:12:10.915007] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.138 [2024-05-13 03:12:10.915208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.138 [2024-05-13 03:12:10.915234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.138 [2024-05-13 03:12:10.915248] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.138 [2024-05-13 03:12:10.915260] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.138 [2024-05-13 03:12:10.915290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.138 qpair failed and we were unable to recover it. 00:31:20.138 [2024-05-13 03:12:10.925081] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.138 [2024-05-13 03:12:10.925276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.138 [2024-05-13 03:12:10.925301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.138 [2024-05-13 03:12:10.925321] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.138 [2024-05-13 03:12:10.925333] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.138 [2024-05-13 03:12:10.925363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.138 qpair failed and we were unable to recover it. 00:31:20.138 [2024-05-13 03:12:10.935080] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.138 [2024-05-13 03:12:10.935298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.138 [2024-05-13 03:12:10.935324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.138 [2024-05-13 03:12:10.935338] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.138 [2024-05-13 03:12:10.935350] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.138 [2024-05-13 03:12:10.935379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.138 qpair failed and we were unable to recover it. 
00:31:20.397 [2024-05-13 03:12:10.945132] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.397 [2024-05-13 03:12:10.945371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.397 [2024-05-13 03:12:10.945408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.397 [2024-05-13 03:12:10.945428] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.397 [2024-05-13 03:12:10.945441] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.397 [2024-05-13 03:12:10.945471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.397 qpair failed and we were unable to recover it. 00:31:20.397 [2024-05-13 03:12:10.955133] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.397 [2024-05-13 03:12:10.955325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.397 [2024-05-13 03:12:10.955351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.397 [2024-05-13 03:12:10.955365] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.397 [2024-05-13 03:12:10.955378] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.397 [2024-05-13 03:12:10.955407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.397 qpair failed and we were unable to recover it. 00:31:20.397 [2024-05-13 03:12:10.965163] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.397 [2024-05-13 03:12:10.965367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.397 [2024-05-13 03:12:10.965393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.397 [2024-05-13 03:12:10.965407] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.397 [2024-05-13 03:12:10.965419] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.397 [2024-05-13 03:12:10.965448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.397 qpair failed and we were unable to recover it. 
00:31:20.397 [2024-05-13 03:12:10.975155] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.397 [2024-05-13 03:12:10.975366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.397 [2024-05-13 03:12:10.975393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.397 [2024-05-13 03:12:10.975407] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.397 [2024-05-13 03:12:10.975419] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.397 [2024-05-13 03:12:10.975448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.397 qpair failed and we were unable to recover it. 00:31:20.397 [2024-05-13 03:12:10.985314] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.397 [2024-05-13 03:12:10.985514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.397 [2024-05-13 03:12:10.985540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.397 [2024-05-13 03:12:10.985554] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.397 [2024-05-13 03:12:10.985566] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.397 [2024-05-13 03:12:10.985595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.397 qpair failed and we were unable to recover it. 00:31:20.397 [2024-05-13 03:12:10.995212] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.397 [2024-05-13 03:12:10.995398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.397 [2024-05-13 03:12:10.995424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.397 [2024-05-13 03:12:10.995438] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.397 [2024-05-13 03:12:10.995451] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.398 [2024-05-13 03:12:10.995480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.398 qpair failed and we were unable to recover it. 
00:31:20.398 [2024-05-13 03:12:11.005277] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.398 [2024-05-13 03:12:11.005516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.398 [2024-05-13 03:12:11.005543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.398 [2024-05-13 03:12:11.005557] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.398 [2024-05-13 03:12:11.005569] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.398 [2024-05-13 03:12:11.005599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.398 qpair failed and we were unable to recover it. 00:31:20.398 [2024-05-13 03:12:11.015279] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.398 [2024-05-13 03:12:11.015471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.398 [2024-05-13 03:12:11.015498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.398 [2024-05-13 03:12:11.015512] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.398 [2024-05-13 03:12:11.015524] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.398 [2024-05-13 03:12:11.015554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.398 qpair failed and we were unable to recover it. 00:31:20.398 [2024-05-13 03:12:11.025313] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.398 [2024-05-13 03:12:11.025501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.398 [2024-05-13 03:12:11.025527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.398 [2024-05-13 03:12:11.025542] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.398 [2024-05-13 03:12:11.025554] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.398 [2024-05-13 03:12:11.025583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.398 qpair failed and we were unable to recover it. 
00:31:20.398 [2024-05-13 03:12:11.035432] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.398 [2024-05-13 03:12:11.035614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.398 [2024-05-13 03:12:11.035640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.398 [2024-05-13 03:12:11.035660] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.398 [2024-05-13 03:12:11.035673] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.398 [2024-05-13 03:12:11.035710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.398 qpair failed and we were unable to recover it. 00:31:20.398 [2024-05-13 03:12:11.045378] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.398 [2024-05-13 03:12:11.045611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.398 [2024-05-13 03:12:11.045638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.398 [2024-05-13 03:12:11.045652] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.398 [2024-05-13 03:12:11.045664] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.398 [2024-05-13 03:12:11.045693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.398 qpair failed and we were unable to recover it. 00:31:20.398 [2024-05-13 03:12:11.055433] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.398 [2024-05-13 03:12:11.055626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.398 [2024-05-13 03:12:11.055652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.398 [2024-05-13 03:12:11.055667] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.398 [2024-05-13 03:12:11.055679] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.398 [2024-05-13 03:12:11.055715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.398 qpair failed and we were unable to recover it. 
00:31:20.398 [2024-05-13 03:12:11.065524] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.398 [2024-05-13 03:12:11.065732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.398 [2024-05-13 03:12:11.065759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.398 [2024-05-13 03:12:11.065773] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.398 [2024-05-13 03:12:11.065785] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.398 [2024-05-13 03:12:11.065816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.398 qpair failed and we were unable to recover it. 00:31:20.398 [2024-05-13 03:12:11.075448] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.398 [2024-05-13 03:12:11.075631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.398 [2024-05-13 03:12:11.075657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.398 [2024-05-13 03:12:11.075671] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.398 [2024-05-13 03:12:11.075683] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.398 [2024-05-13 03:12:11.075719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.398 qpair failed and we were unable to recover it. 00:31:20.398 [2024-05-13 03:12:11.085536] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.398 [2024-05-13 03:12:11.085747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.398 [2024-05-13 03:12:11.085773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.398 [2024-05-13 03:12:11.085788] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.398 [2024-05-13 03:12:11.085800] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.398 [2024-05-13 03:12:11.085830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.398 qpair failed and we were unable to recover it. 
00:31:20.398 [2024-05-13 03:12:11.095526] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.398 [2024-05-13 03:12:11.095719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.398 [2024-05-13 03:12:11.095746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.398 [2024-05-13 03:12:11.095760] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.398 [2024-05-13 03:12:11.095772] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.398 [2024-05-13 03:12:11.095801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.398 qpair failed and we were unable to recover it. 00:31:20.398 [2024-05-13 03:12:11.105563] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.398 [2024-05-13 03:12:11.105764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.398 [2024-05-13 03:12:11.105790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.398 [2024-05-13 03:12:11.105805] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.398 [2024-05-13 03:12:11.105817] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.398 [2024-05-13 03:12:11.105847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.398 qpair failed and we were unable to recover it. 00:31:20.398 [2024-05-13 03:12:11.115588] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.398 [2024-05-13 03:12:11.115774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.398 [2024-05-13 03:12:11.115800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.398 [2024-05-13 03:12:11.115815] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.398 [2024-05-13 03:12:11.115826] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.398 [2024-05-13 03:12:11.115855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.398 qpair failed and we were unable to recover it. 
00:31:20.398 [2024-05-13 03:12:11.125622] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.398 [2024-05-13 03:12:11.125874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.398 [2024-05-13 03:12:11.125904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.398 [2024-05-13 03:12:11.125919] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.398 [2024-05-13 03:12:11.125932] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.398 [2024-05-13 03:12:11.125962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.398 qpair failed and we were unable to recover it. 00:31:20.398 [2024-05-13 03:12:11.135624] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.398 [2024-05-13 03:12:11.135821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.398 [2024-05-13 03:12:11.135847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.399 [2024-05-13 03:12:11.135861] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.399 [2024-05-13 03:12:11.135873] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.399 [2024-05-13 03:12:11.135902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.399 qpair failed and we were unable to recover it. 00:31:20.399 [2024-05-13 03:12:11.145673] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.399 [2024-05-13 03:12:11.145896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.399 [2024-05-13 03:12:11.145922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.399 [2024-05-13 03:12:11.145936] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.399 [2024-05-13 03:12:11.145948] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.399 [2024-05-13 03:12:11.145978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.399 qpair failed and we were unable to recover it. 
00:31:20.399 [2024-05-13 03:12:11.155729] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.399 [2024-05-13 03:12:11.155916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.399 [2024-05-13 03:12:11.155942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.399 [2024-05-13 03:12:11.155956] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.399 [2024-05-13 03:12:11.155969] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.399 [2024-05-13 03:12:11.155998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.399 qpair failed and we were unable to recover it. 00:31:20.399 [2024-05-13 03:12:11.165738] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.399 [2024-05-13 03:12:11.165930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.399 [2024-05-13 03:12:11.165956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.399 [2024-05-13 03:12:11.165971] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.399 [2024-05-13 03:12:11.165983] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.399 [2024-05-13 03:12:11.166031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.399 qpair failed and we were unable to recover it. 00:31:20.399 [2024-05-13 03:12:11.175749] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.399 [2024-05-13 03:12:11.175942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.399 [2024-05-13 03:12:11.175967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.399 [2024-05-13 03:12:11.175981] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.399 [2024-05-13 03:12:11.175993] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.399 [2024-05-13 03:12:11.176023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.399 qpair failed and we were unable to recover it. 
00:31:20.399 [2024-05-13 03:12:11.185795] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.399 [2024-05-13 03:12:11.186026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.399 [2024-05-13 03:12:11.186052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.399 [2024-05-13 03:12:11.186067] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.399 [2024-05-13 03:12:11.186079] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.399 [2024-05-13 03:12:11.186108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.399 qpair failed and we were unable to recover it. 00:31:20.399 [2024-05-13 03:12:11.195793] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.399 [2024-05-13 03:12:11.195980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.399 [2024-05-13 03:12:11.196006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.399 [2024-05-13 03:12:11.196021] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.399 [2024-05-13 03:12:11.196033] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.399 [2024-05-13 03:12:11.196063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.399 qpair failed and we were unable to recover it. 00:31:20.658 [2024-05-13 03:12:11.205840] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.658 [2024-05-13 03:12:11.206028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.658 [2024-05-13 03:12:11.206054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.658 [2024-05-13 03:12:11.206069] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.658 [2024-05-13 03:12:11.206081] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.658 [2024-05-13 03:12:11.206110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.658 qpair failed and we were unable to recover it. 
00:31:20.658 [2024-05-13 03:12:11.215861] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.658 [2024-05-13 03:12:11.216051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.658 [2024-05-13 03:12:11.216082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.658 [2024-05-13 03:12:11.216097] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.658 [2024-05-13 03:12:11.216109] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.658 [2024-05-13 03:12:11.216139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.658 qpair failed and we were unable to recover it. 00:31:20.658 [2024-05-13 03:12:11.225926] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.658 [2024-05-13 03:12:11.226134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.658 [2024-05-13 03:12:11.226161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.658 [2024-05-13 03:12:11.226176] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.658 [2024-05-13 03:12:11.226190] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.658 [2024-05-13 03:12:11.226231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.658 qpair failed and we were unable to recover it. 00:31:20.658 [2024-05-13 03:12:11.235921] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.658 [2024-05-13 03:12:11.236107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.658 [2024-05-13 03:12:11.236134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.658 [2024-05-13 03:12:11.236148] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.658 [2024-05-13 03:12:11.236160] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.658 [2024-05-13 03:12:11.236189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.658 qpair failed and we were unable to recover it. 
00:31:20.658 [2024-05-13 03:12:11.245988] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.658 [2024-05-13 03:12:11.246220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.658 [2024-05-13 03:12:11.246246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.658 [2024-05-13 03:12:11.246260] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.658 [2024-05-13 03:12:11.246272] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.658 [2024-05-13 03:12:11.246301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.658 qpair failed and we were unable to recover it. 00:31:20.658 [2024-05-13 03:12:11.255994] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.658 [2024-05-13 03:12:11.256182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.658 [2024-05-13 03:12:11.256208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.658 [2024-05-13 03:12:11.256222] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.658 [2024-05-13 03:12:11.256240] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.658 [2024-05-13 03:12:11.256270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.658 qpair failed and we were unable to recover it. 00:31:20.658 [2024-05-13 03:12:11.266009] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.658 [2024-05-13 03:12:11.266251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.658 [2024-05-13 03:12:11.266276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.658 [2024-05-13 03:12:11.266291] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.658 [2024-05-13 03:12:11.266303] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.658 [2024-05-13 03:12:11.266331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.658 qpair failed and we were unable to recover it. 
00:31:20.658 [2024-05-13 03:12:11.276038] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.658 [2024-05-13 03:12:11.276223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.659 [2024-05-13 03:12:11.276249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.659 [2024-05-13 03:12:11.276263] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.659 [2024-05-13 03:12:11.276275] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.659 [2024-05-13 03:12:11.276305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.659 qpair failed and we were unable to recover it. 00:31:20.659 [2024-05-13 03:12:11.286083] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.659 [2024-05-13 03:12:11.286314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.659 [2024-05-13 03:12:11.286339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.659 [2024-05-13 03:12:11.286354] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.659 [2024-05-13 03:12:11.286366] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.659 [2024-05-13 03:12:11.286395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.659 qpair failed and we were unable to recover it. 00:31:20.659 [2024-05-13 03:12:11.296087] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.659 [2024-05-13 03:12:11.296272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.659 [2024-05-13 03:12:11.296297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.659 [2024-05-13 03:12:11.296312] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.659 [2024-05-13 03:12:11.296324] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.659 [2024-05-13 03:12:11.296353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.659 qpair failed and we were unable to recover it. 
00:31:20.659 [2024-05-13 03:12:11.306127] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.659 [2024-05-13 03:12:11.306313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.659 [2024-05-13 03:12:11.306338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.659 [2024-05-13 03:12:11.306352] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.659 [2024-05-13 03:12:11.306364] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.659 [2024-05-13 03:12:11.306393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.659 qpair failed and we were unable to recover it. 00:31:20.659 [2024-05-13 03:12:11.316288] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.659 [2024-05-13 03:12:11.316489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.659 [2024-05-13 03:12:11.316516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.659 [2024-05-13 03:12:11.316530] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.659 [2024-05-13 03:12:11.316542] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.659 [2024-05-13 03:12:11.316572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.659 qpair failed and we were unable to recover it. 00:31:20.659 [2024-05-13 03:12:11.326204] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.659 [2024-05-13 03:12:11.326399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.659 [2024-05-13 03:12:11.326425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.659 [2024-05-13 03:12:11.326440] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.659 [2024-05-13 03:12:11.326452] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.659 [2024-05-13 03:12:11.326481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.659 qpair failed and we were unable to recover it. 
00:31:20.659 [2024-05-13 03:12:11.336201] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.659 [2024-05-13 03:12:11.336389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.659 [2024-05-13 03:12:11.336415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.659 [2024-05-13 03:12:11.336429] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.659 [2024-05-13 03:12:11.336441] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.659 [2024-05-13 03:12:11.336470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.659 qpair failed and we were unable to recover it. 00:31:20.659 [2024-05-13 03:12:11.346205] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.659 [2024-05-13 03:12:11.346389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.659 [2024-05-13 03:12:11.346417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.659 [2024-05-13 03:12:11.346431] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.659 [2024-05-13 03:12:11.346449] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.659 [2024-05-13 03:12:11.346479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.659 qpair failed and we were unable to recover it. 00:31:20.659 [2024-05-13 03:12:11.356369] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.659 [2024-05-13 03:12:11.356558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.659 [2024-05-13 03:12:11.356585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.659 [2024-05-13 03:12:11.356599] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.659 [2024-05-13 03:12:11.356611] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.659 [2024-05-13 03:12:11.356640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.659 qpair failed and we were unable to recover it. 
00:31:20.659 [2024-05-13 03:12:11.366340] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.659 [2024-05-13 03:12:11.366559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.659 [2024-05-13 03:12:11.366585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.659 [2024-05-13 03:12:11.366599] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.659 [2024-05-13 03:12:11.366611] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.659 [2024-05-13 03:12:11.366641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.659 qpair failed and we were unable to recover it. 00:31:20.659 [2024-05-13 03:12:11.376368] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.659 [2024-05-13 03:12:11.376556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.659 [2024-05-13 03:12:11.376582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.659 [2024-05-13 03:12:11.376597] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.659 [2024-05-13 03:12:11.376610] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.659 [2024-05-13 03:12:11.376652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.659 qpair failed and we were unable to recover it. 00:31:20.659 [2024-05-13 03:12:11.386339] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.659 [2024-05-13 03:12:11.386531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.659 [2024-05-13 03:12:11.386557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.659 [2024-05-13 03:12:11.386571] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.659 [2024-05-13 03:12:11.386584] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.659 [2024-05-13 03:12:11.386613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.659 qpair failed and we were unable to recover it. 
00:31:20.659 [2024-05-13 03:12:11.396369] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.659 [2024-05-13 03:12:11.396554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.659 [2024-05-13 03:12:11.396580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.659 [2024-05-13 03:12:11.396594] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.659 [2024-05-13 03:12:11.396606] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.659 [2024-05-13 03:12:11.396635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.659 qpair failed and we were unable to recover it. 00:31:20.659 [2024-05-13 03:12:11.406413] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.659 [2024-05-13 03:12:11.406615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.659 [2024-05-13 03:12:11.406641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.659 [2024-05-13 03:12:11.406655] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.659 [2024-05-13 03:12:11.406667] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.659 [2024-05-13 03:12:11.406704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.660 qpair failed and we were unable to recover it. 00:31:20.660 [2024-05-13 03:12:11.416470] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.660 [2024-05-13 03:12:11.416717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.660 [2024-05-13 03:12:11.416743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.660 [2024-05-13 03:12:11.416757] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.660 [2024-05-13 03:12:11.416769] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.660 [2024-05-13 03:12:11.416799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.660 qpair failed and we were unable to recover it. 
00:31:20.660 [2024-05-13 03:12:11.426551] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.660 [2024-05-13 03:12:11.426769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.660 [2024-05-13 03:12:11.426795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.660 [2024-05-13 03:12:11.426809] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.660 [2024-05-13 03:12:11.426821] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.660 [2024-05-13 03:12:11.426851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.660 qpair failed and we were unable to recover it. 00:31:20.660 [2024-05-13 03:12:11.436606] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.660 [2024-05-13 03:12:11.436802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.660 [2024-05-13 03:12:11.436828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.660 [2024-05-13 03:12:11.436848] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.660 [2024-05-13 03:12:11.436861] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.660 [2024-05-13 03:12:11.436891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.660 qpair failed and we were unable to recover it. 00:31:20.660 [2024-05-13 03:12:11.446550] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.660 [2024-05-13 03:12:11.446746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.660 [2024-05-13 03:12:11.446772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.660 [2024-05-13 03:12:11.446786] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.660 [2024-05-13 03:12:11.446798] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.660 [2024-05-13 03:12:11.446827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.660 qpair failed and we were unable to recover it. 
00:31:20.660 [2024-05-13 03:12:11.456620] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.660 [2024-05-13 03:12:11.456863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.660 [2024-05-13 03:12:11.456889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.660 [2024-05-13 03:12:11.456903] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.660 [2024-05-13 03:12:11.456915] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.660 [2024-05-13 03:12:11.456946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.660 qpair failed and we were unable to recover it. 00:31:20.919 [2024-05-13 03:12:11.466580] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.919 [2024-05-13 03:12:11.466793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.919 [2024-05-13 03:12:11.466820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.919 [2024-05-13 03:12:11.466834] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.919 [2024-05-13 03:12:11.466846] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.919 [2024-05-13 03:12:11.466876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.919 qpair failed and we were unable to recover it. 00:31:20.919 [2024-05-13 03:12:11.476610] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.919 [2024-05-13 03:12:11.476835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.919 [2024-05-13 03:12:11.476861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.919 [2024-05-13 03:12:11.476875] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.919 [2024-05-13 03:12:11.476887] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.919 [2024-05-13 03:12:11.476916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.919 qpair failed and we were unable to recover it. 
00:31:20.919 [2024-05-13 03:12:11.486660] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.919 [2024-05-13 03:12:11.486865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.919 [2024-05-13 03:12:11.486891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.919 [2024-05-13 03:12:11.486905] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.919 [2024-05-13 03:12:11.486918] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.919 [2024-05-13 03:12:11.486948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.919 qpair failed and we were unable to recover it. 00:31:20.919 [2024-05-13 03:12:11.496679] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.919 [2024-05-13 03:12:11.496875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.919 [2024-05-13 03:12:11.496902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.919 [2024-05-13 03:12:11.496916] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.919 [2024-05-13 03:12:11.496929] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.919 [2024-05-13 03:12:11.496958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.919 qpair failed and we were unable to recover it. 00:31:20.919 [2024-05-13 03:12:11.506701] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.919 [2024-05-13 03:12:11.506892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.919 [2024-05-13 03:12:11.506918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.919 [2024-05-13 03:12:11.506932] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.919 [2024-05-13 03:12:11.506944] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.919 [2024-05-13 03:12:11.506974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.919 qpair failed and we were unable to recover it. 
00:31:20.919 [2024-05-13 03:12:11.516719] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.919 [2024-05-13 03:12:11.516957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.919 [2024-05-13 03:12:11.516983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.919 [2024-05-13 03:12:11.516997] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.919 [2024-05-13 03:12:11.517010] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.919 [2024-05-13 03:12:11.517039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.919 qpair failed and we were unable to recover it. 00:31:20.920 [2024-05-13 03:12:11.526760] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.920 [2024-05-13 03:12:11.526967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.920 [2024-05-13 03:12:11.526998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.920 [2024-05-13 03:12:11.527014] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.920 [2024-05-13 03:12:11.527026] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.920 [2024-05-13 03:12:11.527056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.920 qpair failed and we were unable to recover it. 00:31:20.920 [2024-05-13 03:12:11.536785] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.920 [2024-05-13 03:12:11.536983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.920 [2024-05-13 03:12:11.537009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.920 [2024-05-13 03:12:11.537023] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.920 [2024-05-13 03:12:11.537035] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.920 [2024-05-13 03:12:11.537065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.920 qpair failed and we were unable to recover it. 
00:31:20.920 [2024-05-13 03:12:11.546803] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.920 [2024-05-13 03:12:11.546995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.920 [2024-05-13 03:12:11.547021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.920 [2024-05-13 03:12:11.547035] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.920 [2024-05-13 03:12:11.547047] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.920 [2024-05-13 03:12:11.547077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.920 qpair failed and we were unable to recover it. 00:31:20.920 [2024-05-13 03:12:11.556828] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.920 [2024-05-13 03:12:11.557014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.920 [2024-05-13 03:12:11.557040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.920 [2024-05-13 03:12:11.557054] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.920 [2024-05-13 03:12:11.557067] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.920 [2024-05-13 03:12:11.557096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.920 qpair failed and we were unable to recover it. 00:31:20.920 [2024-05-13 03:12:11.566948] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.920 [2024-05-13 03:12:11.567137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.920 [2024-05-13 03:12:11.567163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.920 [2024-05-13 03:12:11.567177] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.920 [2024-05-13 03:12:11.567190] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.920 [2024-05-13 03:12:11.567225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.920 qpair failed and we were unable to recover it. 
00:31:20.920 [2024-05-13 03:12:11.576898] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.920 [2024-05-13 03:12:11.577090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.920 [2024-05-13 03:12:11.577116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.920 [2024-05-13 03:12:11.577134] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.920 [2024-05-13 03:12:11.577146] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.920 [2024-05-13 03:12:11.577176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.920 qpair failed and we were unable to recover it. 00:31:20.920 [2024-05-13 03:12:11.586915] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.920 [2024-05-13 03:12:11.587106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.920 [2024-05-13 03:12:11.587132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.920 [2024-05-13 03:12:11.587146] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.920 [2024-05-13 03:12:11.587159] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.920 [2024-05-13 03:12:11.587188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.920 qpair failed and we were unable to recover it. 00:31:20.920 [2024-05-13 03:12:11.597004] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.920 [2024-05-13 03:12:11.597243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.920 [2024-05-13 03:12:11.597270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.920 [2024-05-13 03:12:11.597288] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.920 [2024-05-13 03:12:11.597301] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.920 [2024-05-13 03:12:11.597331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.920 qpair failed and we were unable to recover it. 
00:31:20.920 [2024-05-13 03:12:11.607004] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.920 [2024-05-13 03:12:11.607202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.920 [2024-05-13 03:12:11.607229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.920 [2024-05-13 03:12:11.607243] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.920 [2024-05-13 03:12:11.607255] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.920 [2024-05-13 03:12:11.607285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.920 qpair failed and we were unable to recover it. 00:31:20.920 [2024-05-13 03:12:11.617017] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.920 [2024-05-13 03:12:11.617211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.920 [2024-05-13 03:12:11.617243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.920 [2024-05-13 03:12:11.617258] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.920 [2024-05-13 03:12:11.617271] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.920 [2024-05-13 03:12:11.617300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.920 qpair failed and we were unable to recover it. 00:31:20.920 [2024-05-13 03:12:11.627044] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.920 [2024-05-13 03:12:11.627229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.920 [2024-05-13 03:12:11.627255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.920 [2024-05-13 03:12:11.627270] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.920 [2024-05-13 03:12:11.627282] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.920 [2024-05-13 03:12:11.627311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.920 qpair failed and we were unable to recover it. 
00:31:20.920 [2024-05-13 03:12:11.637136] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.920 [2024-05-13 03:12:11.637324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.920 [2024-05-13 03:12:11.637350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.920 [2024-05-13 03:12:11.637364] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.920 [2024-05-13 03:12:11.637376] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.920 [2024-05-13 03:12:11.637406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.920 qpair failed and we were unable to recover it. 00:31:20.920 [2024-05-13 03:12:11.647159] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.920 [2024-05-13 03:12:11.647360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.920 [2024-05-13 03:12:11.647387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.920 [2024-05-13 03:12:11.647402] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.920 [2024-05-13 03:12:11.647416] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.920 [2024-05-13 03:12:11.647446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.920 qpair failed and we were unable to recover it. 00:31:20.920 [2024-05-13 03:12:11.657115] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.920 [2024-05-13 03:12:11.657305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.920 [2024-05-13 03:12:11.657331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.920 [2024-05-13 03:12:11.657346] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.920 [2024-05-13 03:12:11.657363] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.921 [2024-05-13 03:12:11.657393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.921 qpair failed and we were unable to recover it. 
00:31:20.921 [2024-05-13 03:12:11.667157] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.921 [2024-05-13 03:12:11.667378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.921 [2024-05-13 03:12:11.667404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.921 [2024-05-13 03:12:11.667419] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.921 [2024-05-13 03:12:11.667431] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.921 [2024-05-13 03:12:11.667460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.921 qpair failed and we were unable to recover it. 00:31:20.921 [2024-05-13 03:12:11.677232] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.921 [2024-05-13 03:12:11.677462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.921 [2024-05-13 03:12:11.677488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.921 [2024-05-13 03:12:11.677503] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.921 [2024-05-13 03:12:11.677515] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.921 [2024-05-13 03:12:11.677558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.921 qpair failed and we were unable to recover it. 00:31:20.921 [2024-05-13 03:12:11.687193] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.921 [2024-05-13 03:12:11.687394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.921 [2024-05-13 03:12:11.687421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.921 [2024-05-13 03:12:11.687435] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.921 [2024-05-13 03:12:11.687447] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.921 [2024-05-13 03:12:11.687476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.921 qpair failed and we were unable to recover it. 
00:31:20.921 [2024-05-13 03:12:11.697204] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.921 [2024-05-13 03:12:11.697393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.921 [2024-05-13 03:12:11.697418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.921 [2024-05-13 03:12:11.697432] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.921 [2024-05-13 03:12:11.697445] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.921 [2024-05-13 03:12:11.697474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.921 qpair failed and we were unable to recover it. 00:31:20.921 [2024-05-13 03:12:11.707278] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.921 [2024-05-13 03:12:11.707468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.921 [2024-05-13 03:12:11.707494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.921 [2024-05-13 03:12:11.707508] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.921 [2024-05-13 03:12:11.707520] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.921 [2024-05-13 03:12:11.707563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.921 qpair failed and we were unable to recover it. 00:31:20.921 [2024-05-13 03:12:11.717307] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.921 [2024-05-13 03:12:11.717529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.921 [2024-05-13 03:12:11.717555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.921 [2024-05-13 03:12:11.717570] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.921 [2024-05-13 03:12:11.717582] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:20.921 [2024-05-13 03:12:11.717612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.921 qpair failed and we were unable to recover it. 
00:31:21.180 [2024-05-13 03:12:11.727336] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.180 [2024-05-13 03:12:11.727556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.180 [2024-05-13 03:12:11.727583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.181 [2024-05-13 03:12:11.727597] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.181 [2024-05-13 03:12:11.727610] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:21.181 [2024-05-13 03:12:11.727639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.181 qpair failed and we were unable to recover it. 00:31:21.181 [2024-05-13 03:12:11.737352] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.181 [2024-05-13 03:12:11.737569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.181 [2024-05-13 03:12:11.737595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.181 [2024-05-13 03:12:11.737609] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.181 [2024-05-13 03:12:11.737622] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:21.181 [2024-05-13 03:12:11.737651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.181 qpair failed and we were unable to recover it. 00:31:21.181 [2024-05-13 03:12:11.747381] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.181 [2024-05-13 03:12:11.747566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.181 [2024-05-13 03:12:11.747593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.181 [2024-05-13 03:12:11.747608] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.181 [2024-05-13 03:12:11.747628] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:21.181 [2024-05-13 03:12:11.747658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.181 qpair failed and we were unable to recover it. 
00:31:21.181 [2024-05-13 03:12:11.757383] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.181 [2024-05-13 03:12:11.757565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.181 [2024-05-13 03:12:11.757591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.181 [2024-05-13 03:12:11.757605] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.181 [2024-05-13 03:12:11.757617] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:21.181 [2024-05-13 03:12:11.757646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.181 qpair failed and we were unable to recover it. 00:31:21.181 [2024-05-13 03:12:11.767419] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.181 [2024-05-13 03:12:11.767608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.181 [2024-05-13 03:12:11.767634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.181 [2024-05-13 03:12:11.767649] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.181 [2024-05-13 03:12:11.767661] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:21.181 [2024-05-13 03:12:11.767690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.181 qpair failed and we were unable to recover it. 00:31:21.181 [2024-05-13 03:12:11.777442] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.181 [2024-05-13 03:12:11.777627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.181 [2024-05-13 03:12:11.777653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.181 [2024-05-13 03:12:11.777667] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.181 [2024-05-13 03:12:11.777680] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:21.181 [2024-05-13 03:12:11.777719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.181 qpair failed and we were unable to recover it. 
00:31:21.181 [2024-05-13 03:12:11.787479] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.181 [2024-05-13 03:12:11.787687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.181 [2024-05-13 03:12:11.787722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.181 [2024-05-13 03:12:11.787737] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.181 [2024-05-13 03:12:11.787749] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:21.181 [2024-05-13 03:12:11.787779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.181 qpair failed and we were unable to recover it. 00:31:21.181 [2024-05-13 03:12:11.797563] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.181 [2024-05-13 03:12:11.797760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.181 [2024-05-13 03:12:11.797787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.181 [2024-05-13 03:12:11.797801] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.181 [2024-05-13 03:12:11.797813] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:21.181 [2024-05-13 03:12:11.797843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.181 qpair failed and we were unable to recover it. 00:31:21.181 [2024-05-13 03:12:11.807552] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.181 [2024-05-13 03:12:11.807757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.181 [2024-05-13 03:12:11.807783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.181 [2024-05-13 03:12:11.807797] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.181 [2024-05-13 03:12:11.807809] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:21.181 [2024-05-13 03:12:11.807839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.181 qpair failed and we were unable to recover it. 
00:31:21.181 [2024-05-13 03:12:11.817585] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.181 [2024-05-13 03:12:11.817782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.181 [2024-05-13 03:12:11.817809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.181 [2024-05-13 03:12:11.817823] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.181 [2024-05-13 03:12:11.817836] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:21.181 [2024-05-13 03:12:11.817865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.181 qpair failed and we were unable to recover it. 00:31:21.181 [2024-05-13 03:12:11.827590] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.181 [2024-05-13 03:12:11.827787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.181 [2024-05-13 03:12:11.827813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.181 [2024-05-13 03:12:11.827827] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.181 [2024-05-13 03:12:11.827840] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:21.181 [2024-05-13 03:12:11.827869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.181 qpair failed and we were unable to recover it. 00:31:21.181 [2024-05-13 03:12:11.837625] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.181 [2024-05-13 03:12:11.837815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.181 [2024-05-13 03:12:11.837840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.181 [2024-05-13 03:12:11.837860] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.181 [2024-05-13 03:12:11.837873] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:21.181 [2024-05-13 03:12:11.837903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.181 qpair failed and we were unable to recover it. 
00:31:21.181 [2024-05-13 03:12:11.847658] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.181 [2024-05-13 03:12:11.847859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.181 [2024-05-13 03:12:11.847886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.181 [2024-05-13 03:12:11.847900] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.181 [2024-05-13 03:12:11.847912] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:21.181 [2024-05-13 03:12:11.847942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.181 qpair failed and we were unable to recover it. 00:31:21.181 [2024-05-13 03:12:11.857704] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.181 [2024-05-13 03:12:11.857899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.181 [2024-05-13 03:12:11.857925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.181 [2024-05-13 03:12:11.857939] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.181 [2024-05-13 03:12:11.857951] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:21.181 [2024-05-13 03:12:11.857981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.181 qpair failed and we were unable to recover it. 00:31:21.181 [2024-05-13 03:12:11.867803] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.182 [2024-05-13 03:12:11.867999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.182 [2024-05-13 03:12:11.868025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.182 [2024-05-13 03:12:11.868043] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.182 [2024-05-13 03:12:11.868056] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:21.182 [2024-05-13 03:12:11.868086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.182 qpair failed and we were unable to recover it. 
00:31:21.182 [2024-05-13 03:12:11.877801] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.182 [2024-05-13 03:12:11.878026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.182 [2024-05-13 03:12:11.878053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.182 [2024-05-13 03:12:11.878067] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.182 [2024-05-13 03:12:11.878079] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:21.182 [2024-05-13 03:12:11.878109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.182 qpair failed and we were unable to recover it. 00:31:21.182 [2024-05-13 03:12:11.887888] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.182 [2024-05-13 03:12:11.888084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.182 [2024-05-13 03:12:11.888110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.182 [2024-05-13 03:12:11.888125] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.182 [2024-05-13 03:12:11.888137] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:21.182 [2024-05-13 03:12:11.888166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.182 qpair failed and we were unable to recover it. 00:31:21.182 [2024-05-13 03:12:11.897830] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.182 [2024-05-13 03:12:11.898057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.182 [2024-05-13 03:12:11.898082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.182 [2024-05-13 03:12:11.898095] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.182 [2024-05-13 03:12:11.898108] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:21.182 [2024-05-13 03:12:11.898149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.182 qpair failed and we were unable to recover it. 
00:31:21.182 [2024-05-13 03:12:11.907887] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.182 [2024-05-13 03:12:11.908075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.182 [2024-05-13 03:12:11.908100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.182 [2024-05-13 03:12:11.908114] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.182 [2024-05-13 03:12:11.908126] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:21.182 [2024-05-13 03:12:11.908168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.182 qpair failed and we were unable to recover it. 00:31:21.182 [2024-05-13 03:12:11.917902] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.182 [2024-05-13 03:12:11.918129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.182 [2024-05-13 03:12:11.918155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.182 [2024-05-13 03:12:11.918169] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.182 [2024-05-13 03:12:11.918181] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:21.182 [2024-05-13 03:12:11.918211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.182 qpair failed and we were unable to recover it. 00:31:21.182 [2024-05-13 03:12:11.927968] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.182 [2024-05-13 03:12:11.928158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.182 [2024-05-13 03:12:11.928190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.182 [2024-05-13 03:12:11.928205] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.182 [2024-05-13 03:12:11.928217] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:21.182 [2024-05-13 03:12:11.928246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.182 qpair failed and we were unable to recover it. 
00:31:21.182 [2024-05-13 03:12:11.937902] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.182 [2024-05-13 03:12:11.938105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.182 [2024-05-13 03:12:11.938131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.182 [2024-05-13 03:12:11.938146] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.182 [2024-05-13 03:12:11.938158] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:21.182 [2024-05-13 03:12:11.938187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.182 qpair failed and we were unable to recover it. 00:31:21.182 [2024-05-13 03:12:11.947951] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.182 [2024-05-13 03:12:11.948140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.182 [2024-05-13 03:12:11.948166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.182 [2024-05-13 03:12:11.948181] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.182 [2024-05-13 03:12:11.948193] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:21.182 [2024-05-13 03:12:11.948223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.182 qpair failed and we were unable to recover it. 00:31:21.182 [2024-05-13 03:12:11.957991] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.182 [2024-05-13 03:12:11.958182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.182 [2024-05-13 03:12:11.958208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.182 [2024-05-13 03:12:11.958222] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.182 [2024-05-13 03:12:11.958235] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:21.182 [2024-05-13 03:12:11.958264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.182 qpair failed and we were unable to recover it. 
00:31:21.182 [2024-05-13 03:12:11.968094] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.182 [2024-05-13 03:12:11.968281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.182 [2024-05-13 03:12:11.968307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.182 [2024-05-13 03:12:11.968321] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.182 [2024-05-13 03:12:11.968333] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:21.182 [2024-05-13 03:12:11.968369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.182 qpair failed and we were unable to recover it. 00:31:21.182 [2024-05-13 03:12:11.978054] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.182 [2024-05-13 03:12:11.978237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.182 [2024-05-13 03:12:11.978263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.182 [2024-05-13 03:12:11.978277] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.182 [2024-05-13 03:12:11.978290] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:21.182 [2024-05-13 03:12:11.978319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.182 qpair failed and we were unable to recover it. 00:31:21.442 [2024-05-13 03:12:11.988051] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.442 [2024-05-13 03:12:11.988237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.442 [2024-05-13 03:12:11.988263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.442 [2024-05-13 03:12:11.988277] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.442 [2024-05-13 03:12:11.988289] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:21.442 [2024-05-13 03:12:11.988318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.442 qpair failed and we were unable to recover it. 
00:31:21.442 [2024-05-13 03:12:11.998050] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.442 [2024-05-13 03:12:11.998229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.442 [2024-05-13 03:12:11.998256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.442 [2024-05-13 03:12:11.998270] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.442 [2024-05-13 03:12:11.998282] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:21.442 [2024-05-13 03:12:11.998311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.442 qpair failed and we were unable to recover it. 00:31:21.442 [2024-05-13 03:12:12.008171] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.442 [2024-05-13 03:12:12.008388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.442 [2024-05-13 03:12:12.008415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.442 [2024-05-13 03:12:12.008429] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.442 [2024-05-13 03:12:12.008441] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:21.442 [2024-05-13 03:12:12.008471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.442 qpair failed and we were unable to recover it. 00:31:21.442 [2024-05-13 03:12:12.018147] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.442 [2024-05-13 03:12:12.018335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.442 [2024-05-13 03:12:12.018366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.442 [2024-05-13 03:12:12.018381] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.442 [2024-05-13 03:12:12.018394] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:21.442 [2024-05-13 03:12:12.018423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.442 qpair failed and we were unable to recover it. 
00:31:21.442 [2024-05-13 03:12:12.028196] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.442 [2024-05-13 03:12:12.028451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.442 [2024-05-13 03:12:12.028478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.442 [2024-05-13 03:12:12.028494] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.442 [2024-05-13 03:12:12.028507] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:21.442 [2024-05-13 03:12:12.028536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.442 qpair failed and we were unable to recover it. 00:31:21.442 [2024-05-13 03:12:12.038319] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.442 [2024-05-13 03:12:12.038564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.442 [2024-05-13 03:12:12.038593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.442 [2024-05-13 03:12:12.038612] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.442 [2024-05-13 03:12:12.038625] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:21.442 [2024-05-13 03:12:12.038669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.442 qpair failed and we were unable to recover it. 00:31:21.442 [2024-05-13 03:12:12.048239] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.442 [2024-05-13 03:12:12.048472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.442 [2024-05-13 03:12:12.048500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.442 [2024-05-13 03:12:12.048515] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.442 [2024-05-13 03:12:12.048528] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:21.442 [2024-05-13 03:12:12.048559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.442 qpair failed and we were unable to recover it. 
00:31:21.443 [2024-05-13 03:12:12.058240] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.443 [2024-05-13 03:12:12.058431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.443 [2024-05-13 03:12:12.058458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.443 [2024-05-13 03:12:12.058472] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.443 [2024-05-13 03:12:12.058485] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:21.443 [2024-05-13 03:12:12.058520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.443 qpair failed and we were unable to recover it. 00:31:21.443 [2024-05-13 03:12:12.068281] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.443 [2024-05-13 03:12:12.068471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.443 [2024-05-13 03:12:12.068497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.443 [2024-05-13 03:12:12.068512] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.443 [2024-05-13 03:12:12.068524] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:21.443 [2024-05-13 03:12:12.068553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.443 qpair failed and we were unable to recover it. 00:31:21.443 [2024-05-13 03:12:12.078306] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.443 [2024-05-13 03:12:12.078502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.443 [2024-05-13 03:12:12.078528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.443 [2024-05-13 03:12:12.078542] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.443 [2024-05-13 03:12:12.078555] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:21.443 [2024-05-13 03:12:12.078584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.443 qpair failed and we were unable to recover it. 
00:31:21.443 [2024-05-13 03:12:12.088383] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.443 [2024-05-13 03:12:12.088616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.443 [2024-05-13 03:12:12.088642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.443 [2024-05-13 03:12:12.088656] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.443 [2024-05-13 03:12:12.088669] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:21.443 [2024-05-13 03:12:12.088707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.443 qpair failed and we were unable to recover it. 00:31:21.443 [2024-05-13 03:12:12.098448] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.443 [2024-05-13 03:12:12.098639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.443 [2024-05-13 03:12:12.098666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.443 [2024-05-13 03:12:12.098680] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.443 [2024-05-13 03:12:12.098693] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:21.443 [2024-05-13 03:12:12.098732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.443 qpair failed and we were unable to recover it. 00:31:21.443 [2024-05-13 03:12:12.108396] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.443 [2024-05-13 03:12:12.108590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.443 [2024-05-13 03:12:12.108616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.443 [2024-05-13 03:12:12.108631] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.443 [2024-05-13 03:12:12.108643] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:21.443 [2024-05-13 03:12:12.108672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.443 qpair failed and we were unable to recover it. 
00:31:21.443 [2024-05-13 03:12:12.118414] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.443 [2024-05-13 03:12:12.118600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.443 [2024-05-13 03:12:12.118626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.443 [2024-05-13 03:12:12.118641] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.443 [2024-05-13 03:12:12.118653] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:21.443 [2024-05-13 03:12:12.118682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.443 qpair failed and we were unable to recover it. 00:31:21.443 [2024-05-13 03:12:12.128469] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.443 [2024-05-13 03:12:12.128667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.443 [2024-05-13 03:12:12.128693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.443 [2024-05-13 03:12:12.128717] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.443 [2024-05-13 03:12:12.128729] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:21.443 [2024-05-13 03:12:12.128759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.443 qpair failed and we were unable to recover it. 00:31:21.443 [2024-05-13 03:12:12.138508] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.443 [2024-05-13 03:12:12.138708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.443 [2024-05-13 03:12:12.138735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.443 [2024-05-13 03:12:12.138749] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.443 [2024-05-13 03:12:12.138761] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0044000b90 00:31:21.443 [2024-05-13 03:12:12.138791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.443 qpair failed and we were unable to recover it. 
00:31:21.443 [2024-05-13 03:12:12.148514] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.443 [2024-05-13 03:12:12.148734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.443 [2024-05-13 03:12:12.148767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.443 [2024-05-13 03:12:12.148784] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.443 [2024-05-13 03:12:12.148803] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f004c000b90 00:31:21.443 [2024-05-13 03:12:12.148834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:21.443 qpair failed and we were unable to recover it. 00:31:21.443 [2024-05-13 03:12:12.158584] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.443 [2024-05-13 03:12:12.158803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.443 [2024-05-13 03:12:12.158831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.443 [2024-05-13 03:12:12.158845] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.443 [2024-05-13 03:12:12.158858] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f004c000b90 00:31:21.443 [2024-05-13 03:12:12.158888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:21.443 qpair failed and we were unable to recover it. 00:31:21.443 [2024-05-13 03:12:12.168559] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.443 [2024-05-13 03:12:12.168752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.443 [2024-05-13 03:12:12.168790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.443 [2024-05-13 03:12:12.168805] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.443 [2024-05-13 03:12:12.168817] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:21.443 [2024-05-13 03:12:12.168847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:21.443 qpair failed and we were unable to recover it. 
00:31:21.443 [2024-05-13 03:12:12.178568] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.443 [2024-05-13 03:12:12.178763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.443 [2024-05-13 03:12:12.178789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.443 [2024-05-13 03:12:12.178804] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.443 [2024-05-13 03:12:12.178816] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2be50 00:31:21.443 [2024-05-13 03:12:12.178845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:21.443 qpair failed and we were unable to recover it. 00:31:21.443 [2024-05-13 03:12:12.178979] nvme_ctrlr.c:4341:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:31:21.443 A controller has encountered a failure and is being reset. 00:31:21.443 [2024-05-13 03:12:12.188686] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.443 [2024-05-13 03:12:12.188900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.443 [2024-05-13 03:12:12.188933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.443 [2024-05-13 03:12:12.188949] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.444 [2024-05-13 03:12:12.188962] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f003c000b90 00:31:21.444 [2024-05-13 03:12:12.188994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:21.444 qpair failed and we were unable to recover it. 00:31:21.444 [2024-05-13 03:12:12.198731] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.444 [2024-05-13 03:12:12.198926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.444 [2024-05-13 03:12:12.198955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.444 [2024-05-13 03:12:12.198970] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.444 [2024-05-13 03:12:12.198983] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f003c000b90 00:31:21.444 [2024-05-13 03:12:12.199014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:21.444 qpair failed and we were unable to recover it. 00:31:21.444 [2024-05-13 03:12:12.199123] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf39970 (9): Bad file descriptor 00:31:21.702 Controller properly reset. 
00:31:21.702 Initializing NVMe Controllers 00:31:21.702 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:21.702 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:21.702 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:31:21.702 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:31:21.702 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:31:21.702 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:31:21.702 Initialization complete. Launching workers. 00:31:21.702 Starting thread on core 1 00:31:21.702 Starting thread on core 2 00:31:21.702 Starting thread on core 3 00:31:21.702 Starting thread on core 0 00:31:21.702 03:12:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@59 -- # sync 00:31:21.702 00:31:21.702 real 0m10.879s 00:31:21.702 user 0m16.677s 00:31:21.702 sys 0m5.727s 00:31:21.702 03:12:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:21.702 03:12:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:21.702 ************************************ 00:31:21.702 END TEST nvmf_target_disconnect_tc2 00:31:21.702 ************************************ 00:31:21.702 03:12:12 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:31:21.702 03:12:12 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:31:21.702 03:12:12 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@85 -- # nvmftestfini 00:31:21.702 03:12:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:21.702 03:12:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:31:21.702 03:12:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:21.702 03:12:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:31:21.702 03:12:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:21.702 03:12:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:21.702 rmmod nvme_tcp 00:31:21.702 rmmod nvme_fabrics 00:31:21.702 rmmod nvme_keyring 00:31:21.702 03:12:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:21.702 03:12:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:31:21.702 03:12:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:31:21.702 03:12:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 486455 ']' 00:31:21.702 03:12:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 486455 00:31:21.702 03:12:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@946 -- # '[' -z 486455 ']' 00:31:21.702 03:12:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # kill -0 486455 00:31:21.702 03:12:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # uname 00:31:21.702 03:12:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:21.702 03:12:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 486455 00:31:21.702 03:12:12 nvmf_tcp.nvmf_target_disconnect -- 
common/autotest_common.sh@952 -- # process_name=reactor_4 00:31:21.702 03:12:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_4 = sudo ']' 00:31:21.702 03:12:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 486455' 00:31:21.702 killing process with pid 486455 00:31:21.702 03:12:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@965 -- # kill 486455 00:31:21.702 [2024-05-13 03:12:12.372957] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:21.702 03:12:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # wait 486455 00:31:21.961 03:12:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:21.961 03:12:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:21.961 03:12:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:21.961 03:12:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:21.961 03:12:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:21.961 03:12:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:21.961 03:12:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:21.961 03:12:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:23.864 03:12:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:23.864 00:31:23.864 real 0m15.573s 00:31:23.864 user 0m43.150s 00:31:23.864 sys 0m7.644s 00:31:23.864 03:12:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:23.864 03:12:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:23.864 ************************************ 00:31:23.864 END TEST nvmf_target_disconnect 00:31:23.864 ************************************ 00:31:24.122 03:12:14 nvmf_tcp -- nvmf/nvmf.sh@124 -- # timing_exit host 00:31:24.122 03:12:14 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:24.122 03:12:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:24.122 03:12:14 nvmf_tcp -- nvmf/nvmf.sh@126 -- # trap - SIGINT SIGTERM EXIT 00:31:24.122 00:31:24.122 real 23m39.310s 00:31:24.122 user 65m8.263s 00:31:24.122 sys 5m58.267s 00:31:24.122 03:12:14 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:24.122 03:12:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:24.122 ************************************ 00:31:24.122 END TEST nvmf_tcp 00:31:24.122 ************************************ 00:31:24.122 03:12:14 -- spdk/autotest.sh@286 -- # [[ 0 -eq 0 ]] 00:31:24.122 03:12:14 -- spdk/autotest.sh@287 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:24.122 03:12:14 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:24.122 03:12:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:24.122 03:12:14 -- common/autotest_common.sh@10 -- # set +x 00:31:24.122 ************************************ 00:31:24.122 START TEST spdkcli_nvmf_tcp 00:31:24.122 ************************************ 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:24.122 * Looking for test storage... 00:31:24.122 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=488157 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 488157 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # '[' -z 488157 ']' 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:24.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:24.122 03:12:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:24.122 [2024-05-13 03:12:14.873063] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:31:24.122 [2024-05-13 03:12:14.873134] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid488157 ] 00:31:24.122 EAL: No free 2048 kB hugepages reported on node 1 00:31:24.122 [2024-05-13 03:12:14.904809] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:24.380 [2024-05-13 03:12:14.931592] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:24.380 [2024-05-13 03:12:15.017212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:24.380 [2024-05-13 03:12:15.017217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:24.380 03:12:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:24.380 03:12:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # return 0 00:31:24.380 03:12:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:31:24.380 03:12:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:24.380 03:12:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:24.380 03:12:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:31:24.380 03:12:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:31:24.380 03:12:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:31:24.380 03:12:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:24.380 03:12:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:24.380 03:12:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:31:24.380 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:31:24.380 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:31:24.380 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:31:24.380 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:31:24.380 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:31:24.380 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:31:24.380 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:24.380 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:31:24.380 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:31:24.380 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:24.380 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:24.380 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:31:24.380 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:24.380 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:24.380 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:31:24.380 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:24.380 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:24.380 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:24.380 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:24.380 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:31:24.380 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:31:24.380 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:24.380 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:31:24.380 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:24.380 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:31:24.380 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:31:24.380 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:31:24.380 ' 00:31:26.908 [2024-05-13 03:12:17.700384] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:28.279 [2024-05-13 03:12:18.944295] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:31:28.279 [2024-05-13 03:12:18.944860] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:31:30.806 [2024-05-13 03:12:21.256133] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:31:32.737 [2024-05-13 03:12:23.230389] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:31:34.108 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:31:34.108 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:31:34.108 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:31:34.108 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:31:34.108 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:31:34.108 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:31:34.108 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:31:34.108 Executing command: ['/nvmf/subsystem create 
nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:34.108 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:31:34.108 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:31:34.108 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:34.108 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:34.108 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:31:34.108 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:34.108 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:34.108 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:31:34.109 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:34.109 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:34.109 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:34.109 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:34.109 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:31:34.109 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:31:34.109 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:34.109 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:31:34.109 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:34.109 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:31:34.109 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:31:34.109 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:31:34.109 03:12:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:31:34.109 03:12:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:34.109 03:12:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:34.109 03:12:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:31:34.109 03:12:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:34.109 03:12:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:34.109 03:12:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # 
check_match 00:31:34.109 03:12:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:31:34.674 03:12:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:31:34.674 03:12:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:31:34.674 03:12:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:31:34.674 03:12:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:34.674 03:12:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:34.674 03:12:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:31:34.674 03:12:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:34.674 03:12:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:34.674 03:12:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:31:34.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:31:34.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:34.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:31:34.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:31:34.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:31:34.674 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:31:34.674 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:34.674 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:31:34.674 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:31:34.674 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:31:34.674 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:31:34.674 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:31:34.674 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:31:34.674 ' 00:31:39.937 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:31:39.937 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:31:39.937 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:39.937 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:31:39.937 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:31:39.937 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:31:39.937 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:31:39.937 Executing command: 
['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:39.937 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:31:39.937 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:31:39.937 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:31:39.937 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:31:39.937 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:31:39.937 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:31:39.937 03:12:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:31:39.937 03:12:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:39.937 03:12:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:39.937 03:12:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 488157 00:31:39.937 03:12:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 488157 ']' 00:31:39.937 03:12:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 488157 00:31:39.937 03:12:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # uname 00:31:39.937 03:12:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:39.937 03:12:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 488157 00:31:39.937 03:12:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:39.937 03:12:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:39.937 03:12:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 488157' 00:31:39.937 killing process with pid 488157 00:31:39.937 03:12:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # kill 488157 00:31:39.937 [2024-05-13 03:12:30.689814] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:39.937 03:12:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # wait 488157 00:31:40.195 03:12:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:31:40.195 03:12:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:31:40.195 03:12:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 488157 ']' 00:31:40.195 03:12:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 488157 00:31:40.195 03:12:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 488157 ']' 00:31:40.195 03:12:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 488157 00:31:40.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (488157) - No such process 00:31:40.195 03:12:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # echo 'Process with pid 488157 is not found' 00:31:40.195 Process with pid 488157 is not found 00:31:40.195 03:12:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:31:40.195 03:12:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:31:40.195 03:12:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:31:40.196 00:31:40.196 real 0m16.163s 00:31:40.196 user 0m34.297s 00:31:40.196 sys 0m0.833s 00:31:40.196 
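For reference, the spdkcli flow exercised above (create malloc bdevs, a TCP transport, subsystems with namespaces/listeners, then check_match and teardown) can be reproduced by hand against an already running nvmf_tgt. A minimal sketch, assuming the repo-relative script paths of this workspace and the serials/ports shown in the trace; each command here is one of the strings spdkcli_job.py fed to spdkcli, issued one invocation at a time:

# build a small NVMe/TCP target, mirroring the create job above
scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc1
scripts/spdkcli.py nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
scripts/spdkcli.py /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc1 1
scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4
# inspect the tree the way check_match does, then tear down like the delete job
scripts/spdkcli.py ll /nvmf
scripts/spdkcli.py /nvmf/subsystem delete_all
scripts/spdkcli.py /bdevs/malloc delete Malloc1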
03:12:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:40.196 03:12:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:40.196 ************************************ 00:31:40.196 END TEST spdkcli_nvmf_tcp 00:31:40.196 ************************************ 00:31:40.196 03:12:30 -- spdk/autotest.sh@288 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:40.196 03:12:30 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:40.196 03:12:30 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:40.196 03:12:30 -- common/autotest_common.sh@10 -- # set +x 00:31:40.196 ************************************ 00:31:40.196 START TEST nvmf_identify_passthru 00:31:40.196 ************************************ 00:31:40.196 03:12:30 nvmf_identify_passthru -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:40.455 * Looking for test storage... 00:31:40.455 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:40.455 03:12:31 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:40.455 03:12:31 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:31:40.455 03:12:31 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:40.455 03:12:31 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:40.455 03:12:31 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:40.455 03:12:31 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:40.455 03:12:31 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:40.455 03:12:31 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:40.455 03:12:31 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:40.455 03:12:31 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:40.455 03:12:31 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:40.455 03:12:31 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:40.455 03:12:31 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:40.455 03:12:31 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:40.455 03:12:31 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:40.455 03:12:31 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:40.455 03:12:31 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:40.455 03:12:31 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:40.455 03:12:31 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:40.455 03:12:31 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:40.455 03:12:31 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:40.455 03:12:31 nvmf_identify_passthru -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:31:40.455 03:12:31 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.455 03:12:31 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.455 03:12:31 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.455 03:12:31 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:40.455 03:12:31 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.455 03:12:31 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:31:40.455 03:12:31 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:40.455 03:12:31 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:40.455 03:12:31 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:40.455 03:12:31 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:40.455 03:12:31 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:40.455 03:12:31 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:40.455 03:12:31 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:40.455 03:12:31 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:40.455 03:12:31 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:40.455 03:12:31 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:40.455 03:12:31 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:40.455 03:12:31 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:40.455 03:12:31 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.455 03:12:31 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.455 03:12:31 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.455 03:12:31 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:40.455 03:12:31 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.455 03:12:31 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:31:40.455 03:12:31 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:40.455 03:12:31 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:40.455 03:12:31 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:40.455 03:12:31 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:40.455 03:12:31 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:40.455 03:12:31 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:40.455 03:12:31 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:40.455 03:12:31 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:40.455 03:12:31 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:40.455 03:12:31 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:40.455 03:12:31 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:31:40.455 03:12:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:42.360 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:42.360 03:12:32 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:31:42.360 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:42.360 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:42.360 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:42.360 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:42.360 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:42.360 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:31:42.360 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:42.360 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:31:42.360 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:31:42.360 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:31:42.360 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:31:42.360 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:31:42.360 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:31:42.360 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:42.360 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:42.360 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:42.360 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:42.360 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:42.360 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:42.360 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:42.360 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:42.360 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:42.360 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:42.360 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:42.360 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:42.360 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:42.360 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:42.360 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:42.360 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:42.360 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:42.360 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:42.360 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:42.360 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:42.360 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:42.360 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:42.360 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:31:42.360 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:42.360 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:42.360 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:42.360 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:42.360 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:42.360 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:42.360 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:42.360 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:42.360 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:42.360 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:42.360 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:42.360 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:42.360 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:42.360 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:42.361 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:42.361 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:42.361 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:42.361 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:42.361 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:42.361 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:42.361 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:42.361 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:42.361 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:42.361 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:42.361 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:42.361 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:42.361 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:42.361 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:42.361 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:42.361 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:42.361 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:42.361 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:42.361 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:42.361 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:42.361 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:31:42.361 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:42.361 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
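The gather_supported_nvmf_pci_devs block above matches a table of Intel E810/X722 and Mellanox device IDs and then maps each matching PCI address to its kernel net device through sysfs, which is where the cvl_0_0/cvl_0_1 names come from. A rough, illustrative equivalent of that lookup for the 0x8086:0x159b (ice) ports reported in this trace; this loop is a sketch, not the common.sh implementation:

# list E810 (8086:159b) NICs and the net devices the kernel registered for them
for pci in $(lspci -Dd 8086:159b | awk '{print $1}'); do
  for net in /sys/bus/pci/devices/"$pci"/net/*; do
    [ -e "$net" ] && echo "Found net device under $pci: $(basename "$net")"
  done
done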
00:31:42.361 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:42.361 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:42.361 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:42.361 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:42.361 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:42.361 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:42.361 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:42.361 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:42.361 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:42.361 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:42.361 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:42.361 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:42.361 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:42.361 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:42.361 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:42.361 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:42.361 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:42.361 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:42.361 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:42.361 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:42.361 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:42.361 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:42.361 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:31:42.361 00:31:42.361 --- 10.0.0.2 ping statistics --- 00:31:42.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:42.361 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:31:42.361 03:12:32 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:42.361 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:42.361 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:31:42.361 00:31:42.361 --- 10.0.0.1 ping statistics --- 00:31:42.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:42.361 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:31:42.361 03:12:33 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:42.361 03:12:33 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:31:42.361 03:12:33 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:42.361 03:12:33 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:42.361 03:12:33 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:42.361 03:12:33 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:42.361 03:12:33 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:42.361 03:12:33 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:42.361 03:12:33 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:42.361 03:12:33 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:31:42.361 03:12:33 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:42.361 03:12:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:42.361 03:12:33 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:31:42.361 03:12:33 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # bdfs=() 00:31:42.361 03:12:33 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # local bdfs 00:31:42.361 03:12:33 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:31:42.361 03:12:33 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:31:42.361 03:12:33 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:31:42.361 03:12:33 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:31:42.361 03:12:33 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:42.361 03:12:33 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:42.361 03:12:33 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:31:42.361 03:12:33 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:31:42.361 03:12:33 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:31:42.361 03:12:33 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # echo 0000:88:00.0 00:31:42.361 03:12:33 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:31:42.361 03:12:33 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:31:42.361 03:12:33 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:31:42.361 03:12:33 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:31:42.361 03:12:33 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:31:42.361 EAL: No free 2048 kB hugepages reported on node 1 00:31:46.542 
03:12:37 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:31:46.542 03:12:37 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:31:46.542 03:12:37 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:31:46.542 03:12:37 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:31:46.542 EAL: No free 2048 kB hugepages reported on node 1 00:31:50.727 03:12:41 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:31:50.727 03:12:41 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:31:50.727 03:12:41 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:50.727 03:12:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:50.727 03:12:41 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:31:50.727 03:12:41 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:50.727 03:12:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:50.727 03:12:41 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=492649 00:31:50.727 03:12:41 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:31:50.727 03:12:41 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:50.727 03:12:41 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 492649 00:31:50.727 03:12:41 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # '[' -z 492649 ']' 00:31:50.727 03:12:41 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:50.727 03:12:41 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:50.727 03:12:41 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:50.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:50.728 03:12:41 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:50.728 03:12:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:50.988 [2024-05-13 03:12:41.543674] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:31:50.988 [2024-05-13 03:12:41.543789] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:50.988 EAL: No free 2048 kB hugepages reported on node 1 00:31:50.988 [2024-05-13 03:12:41.581554] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:50.988 [2024-05-13 03:12:41.613861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:50.988 [2024-05-13 03:12:41.705806] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:31:50.988 [2024-05-13 03:12:41.705865] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:50.988 [2024-05-13 03:12:41.705881] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:50.988 [2024-05-13 03:12:41.705894] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:50.988 [2024-05-13 03:12:41.705906] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:50.988 [2024-05-13 03:12:41.705968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:50.988 [2024-05-13 03:12:41.706028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:50.988 [2024-05-13 03:12:41.706145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:50.988 [2024-05-13 03:12:41.706147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:50.988 03:12:41 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:50.988 03:12:41 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # return 0 00:31:50.988 03:12:41 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:31:50.988 03:12:41 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.988 03:12:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:50.988 INFO: Log level set to 20 00:31:50.988 INFO: Requests: 00:31:50.988 { 00:31:50.988 "jsonrpc": "2.0", 00:31:50.988 "method": "nvmf_set_config", 00:31:50.988 "id": 1, 00:31:50.988 "params": { 00:31:50.988 "admin_cmd_passthru": { 00:31:50.988 "identify_ctrlr": true 00:31:50.988 } 00:31:50.988 } 00:31:50.988 } 00:31:50.988 00:31:50.988 INFO: response: 00:31:50.988 { 00:31:50.988 "jsonrpc": "2.0", 00:31:50.988 "id": 1, 00:31:50.988 "result": true 00:31:50.988 } 00:31:50.988 00:31:50.988 03:12:41 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.988 03:12:41 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:31:50.988 03:12:41 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.988 03:12:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:50.988 INFO: Setting log level to 20 00:31:50.988 INFO: Setting log level to 20 00:31:50.988 INFO: Log level set to 20 00:31:50.988 INFO: Log level set to 20 00:31:50.988 INFO: Requests: 00:31:50.988 { 00:31:50.988 "jsonrpc": "2.0", 00:31:50.988 "method": "framework_start_init", 00:31:50.988 "id": 1 00:31:50.988 } 00:31:50.988 00:31:50.988 INFO: Requests: 00:31:50.988 { 00:31:50.988 "jsonrpc": "2.0", 00:31:50.988 "method": "framework_start_init", 00:31:50.988 "id": 1 00:31:50.988 } 00:31:50.988 00:31:51.247 [2024-05-13 03:12:41.858966] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:31:51.247 INFO: response: 00:31:51.247 { 00:31:51.247 "jsonrpc": "2.0", 00:31:51.247 "id": 1, 00:31:51.247 "result": true 00:31:51.247 } 00:31:51.247 00:31:51.247 INFO: response: 00:31:51.247 { 00:31:51.247 "jsonrpc": "2.0", 00:31:51.247 "id": 1, 00:31:51.247 "result": true 00:31:51.247 } 00:31:51.247 00:31:51.247 03:12:41 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.247 03:12:41 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 
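Because nvmf_tgt was started with --wait-for-rpc, the test drives the whole bring-up over JSON-RPC: enable identify passthru before subsystem initialization, then start the framework and create the TCP transport. rpc_cmd is only a thin wrapper around scripts/rpc.py, so the same sequence issued directly would look roughly like this (flags and the default /var/tmp/spdk.sock socket as in the trace):

# configure identify-command passthru while the target is still waiting for RPCs
scripts/rpc.py -s /var/tmp/spdk.sock nvmf_set_config --passthru-identify-ctrlr
# finish subsystem initialization, then create the TCP transport with an 8192-byte I/O unit size
scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init
scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192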
00:31:51.247 03:12:41 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.247 03:12:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:51.247 INFO: Setting log level to 40 00:31:51.247 INFO: Setting log level to 40 00:31:51.247 INFO: Setting log level to 40 00:31:51.247 [2024-05-13 03:12:41.868938] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:51.247 03:12:41 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.247 03:12:41 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:31:51.247 03:12:41 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:51.247 03:12:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:51.247 03:12:41 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:31:51.247 03:12:41 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.247 03:12:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:54.600 Nvme0n1 00:31:54.600 03:12:44 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.600 03:12:44 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:31:54.600 03:12:44 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.600 03:12:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:54.600 03:12:44 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.600 03:12:44 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:54.600 03:12:44 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.600 03:12:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:54.600 03:12:44 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.600 03:12:44 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:54.600 03:12:44 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.600 03:12:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:54.600 [2024-05-13 03:12:44.752288] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:31:54.600 [2024-05-13 03:12:44.752580] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:54.600 03:12:44 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.600 03:12:44 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:31:54.600 03:12:44 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.600 03:12:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:54.600 [ 00:31:54.600 { 00:31:54.600 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:54.600 "subtype": "Discovery", 00:31:54.600 "listen_addresses": [], 00:31:54.600 "allow_any_host": true, 00:31:54.600 "hosts": [] 00:31:54.600 }, 00:31:54.600 { 00:31:54.600 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:31:54.600 "subtype": "NVMe", 00:31:54.600 "listen_addresses": [ 00:31:54.600 { 00:31:54.600 "trtype": "TCP", 00:31:54.600 "adrfam": "IPv4", 00:31:54.600 "traddr": "10.0.0.2", 00:31:54.600 "trsvcid": "4420" 00:31:54.600 } 00:31:54.600 ], 00:31:54.600 "allow_any_host": true, 00:31:54.600 "hosts": [], 00:31:54.600 "serial_number": "SPDK00000000000001", 00:31:54.600 "model_number": "SPDK bdev Controller", 00:31:54.600 "max_namespaces": 1, 00:31:54.600 "min_cntlid": 1, 00:31:54.600 "max_cntlid": 65519, 00:31:54.600 "namespaces": [ 00:31:54.600 { 00:31:54.600 "nsid": 1, 00:31:54.600 "bdev_name": "Nvme0n1", 00:31:54.600 "name": "Nvme0n1", 00:31:54.600 "nguid": "0DB3750D9AE3426FBD5AE5262B8E49FA", 00:31:54.600 "uuid": "0db3750d-9ae3-426f-bd5a-e5262b8e49fa" 00:31:54.600 } 00:31:54.600 ] 00:31:54.600 } 00:31:54.600 ] 00:31:54.600 03:12:44 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.600 03:12:44 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:54.600 03:12:44 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:31:54.600 03:12:44 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:31:54.600 EAL: No free 2048 kB hugepages reported on node 1 00:31:54.600 03:12:44 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:31:54.600 03:12:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:54.600 03:12:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:31:54.600 03:12:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:31:54.600 EAL: No free 2048 kB hugepages reported on node 1 00:31:54.600 03:12:45 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:31:54.600 03:12:45 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:31:54.600 03:12:45 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:31:54.600 03:12:45 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:54.600 03:12:45 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.600 03:12:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:54.600 03:12:45 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.600 03:12:45 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:31:54.600 03:12:45 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:31:54.600 03:12:45 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:54.600 03:12:45 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:31:54.600 03:12:45 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:54.600 03:12:45 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:31:54.600 03:12:45 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:54.600 03:12:45 
nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:54.600 rmmod nvme_tcp 00:31:54.600 rmmod nvme_fabrics 00:31:54.600 rmmod nvme_keyring 00:31:54.600 03:12:45 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:54.600 03:12:45 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:31:54.600 03:12:45 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:31:54.600 03:12:45 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 492649 ']' 00:31:54.600 03:12:45 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 492649 00:31:54.600 03:12:45 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # '[' -z 492649 ']' 00:31:54.600 03:12:45 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # kill -0 492649 00:31:54.600 03:12:45 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # uname 00:31:54.601 03:12:45 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:54.601 03:12:45 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 492649 00:31:54.601 03:12:45 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:54.601 03:12:45 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:54.601 03:12:45 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # echo 'killing process with pid 492649' 00:31:54.601 killing process with pid 492649 00:31:54.601 03:12:45 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # kill 492649 00:31:54.601 [2024-05-13 03:12:45.191938] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:54.601 03:12:45 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # wait 492649 00:31:55.974 03:12:46 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:55.974 03:12:46 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:55.974 03:12:46 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:55.974 03:12:46 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:55.974 03:12:46 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:55.974 03:12:46 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:55.974 03:12:46 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:55.974 03:12:46 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:58.507 03:12:48 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:58.507 00:31:58.507 real 0m17.803s 00:31:58.507 user 0m26.482s 00:31:58.507 sys 0m2.240s 00:31:58.507 03:12:48 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:58.507 03:12:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:58.507 ************************************ 00:31:58.507 END TEST nvmf_identify_passthru 00:31:58.507 ************************************ 00:31:58.507 03:12:48 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:58.507 03:12:48 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:31:58.507 03:12:48 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:58.507 
03:12:48 -- common/autotest_common.sh@10 -- # set +x 00:31:58.507 ************************************ 00:31:58.507 START TEST nvmf_dif 00:31:58.507 ************************************ 00:31:58.507 03:12:48 nvmf_dif -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:58.507 * Looking for test storage... 00:31:58.507 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:58.507 03:12:48 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:58.507 03:12:48 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:31:58.507 03:12:48 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:58.507 03:12:48 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:58.507 03:12:48 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:58.507 03:12:48 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:58.507 03:12:48 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:58.507 03:12:48 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:58.507 03:12:48 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:58.507 03:12:48 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:58.507 03:12:48 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:58.507 03:12:48 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:58.507 03:12:48 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:58.507 03:12:48 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:58.507 03:12:48 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:58.507 03:12:48 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:58.507 03:12:48 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:58.507 03:12:48 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:58.507 03:12:48 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:58.507 03:12:48 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:58.507 03:12:48 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:58.507 03:12:48 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:58.507 03:12:48 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.507 03:12:48 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.507 03:12:48 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.507 03:12:48 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:31:58.507 03:12:48 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.507 03:12:48 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:31:58.507 03:12:48 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:58.507 03:12:48 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:58.507 03:12:48 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:58.507 03:12:48 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:58.507 03:12:48 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:58.507 03:12:48 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:58.507 03:12:48 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:58.507 03:12:48 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:58.507 03:12:48 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:31:58.507 03:12:48 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:31:58.507 03:12:48 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:31:58.507 03:12:48 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:31:58.507 03:12:48 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:31:58.507 03:12:48 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:58.507 03:12:48 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:58.507 03:12:48 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:58.507 03:12:48 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:58.507 03:12:48 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:58.507 03:12:48 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:58.507 03:12:48 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:58.507 03:12:48 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:58.507 03:12:48 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:58.507 03:12:48 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:58.507 03:12:48 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:31:58.507 03:12:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 
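nvmf_dif sources the same nvmf/common.sh helpers, so nvmftestinit is about to rebuild the namespace-based TCP topology the previous test used: one E810 port (cvl_0_0) is moved into a private namespace as the target side at 10.0.0.2, while its peer port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. Condensed from the nvmf_tcp_init trace earlier, with interface names and addresses exactly as they appear in the log:

# target interface goes into its own namespace, initiator stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# allow NVMe/TCP traffic in and verify reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1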
00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:00.412 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:00.412 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:00.412 03:12:50 nvmf_dif -- 
nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:00.412 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:00.412 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:00.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:00.412 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:32:00.412 00:32:00.412 --- 10.0.0.2 ping statistics --- 00:32:00.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:00.412 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:00.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:00.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:32:00.412 00:32:00.412 --- 10.0.0.1 ping statistics --- 00:32:00.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:00.412 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:32:00.412 03:12:50 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:01.350 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:32:01.350 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:32:01.350 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:32:01.350 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:32:01.350 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:32:01.350 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:32:01.350 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:32:01.350 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:32:01.350 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:32:01.350 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:32:01.350 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:32:01.350 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:32:01.350 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:32:01.350 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:32:01.350 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:32:01.350 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:32:01.350 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:32:01.350 03:12:52 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:01.351 03:12:52 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:01.351 03:12:52 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:01.351 03:12:52 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:01.351 03:12:52 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:01.351 03:12:52 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:01.351 03:12:52 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:32:01.351 03:12:52 nvmf_dif -- 
target/dif.sh@137 -- # nvmfappstart 00:32:01.351 03:12:52 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:01.351 03:12:52 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:32:01.351 03:12:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:01.351 03:12:52 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=495908 00:32:01.351 03:12:52 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:32:01.351 03:12:52 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 495908 00:32:01.351 03:12:52 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 495908 ']' 00:32:01.351 03:12:52 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:01.351 03:12:52 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:01.351 03:12:52 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:01.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:01.351 03:12:52 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:01.351 03:12:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:01.351 [2024-05-13 03:12:52.148958] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:32:01.351 [2024-05-13 03:12:52.149059] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:01.610 EAL: No free 2048 kB hugepages reported on node 1 00:32:01.610 [2024-05-13 03:12:52.187403] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:01.610 [2024-05-13 03:12:52.219577] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:01.610 [2024-05-13 03:12:52.308112] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:01.610 [2024-05-13 03:12:52.308168] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:01.610 [2024-05-13 03:12:52.308188] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:01.610 [2024-05-13 03:12:52.308200] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:01.610 [2024-05-13 03:12:52.308209] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
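For readers following the trace: the nvmfappstart call above amounts to launching nvmf_tgt inside the target-side network namespace created earlier and then blocking until its RPC socket answers. A minimal sketch, assuming the same namespace and binary paths as the trace; the polling loop only approximates what waitforlisten does.

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  nvmfpid=$!
  # Rough stand-in for waitforlisten: poll the default RPC socket until the app responds.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done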
00:32:01.610 [2024-05-13 03:12:52.308244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:01.869 03:12:52 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:01.869 03:12:52 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:32:01.869 03:12:52 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:01.869 03:12:52 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:01.869 03:12:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:01.869 03:12:52 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:01.869 03:12:52 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:32:01.869 03:12:52 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:32:01.869 03:12:52 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.869 03:12:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:01.869 [2024-05-13 03:12:52.450274] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:01.869 03:12:52 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.869 03:12:52 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:32:01.869 03:12:52 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:32:01.869 03:12:52 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:01.869 03:12:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:01.869 ************************************ 00:32:01.869 START TEST fio_dif_1_default 00:32:01.869 ************************************ 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:01.869 bdev_null0 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:01.869 [2024-05-13 03:12:52.518381] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:32:01.869 [2024-05-13 03:12:52.518640] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:01.869 { 00:32:01.869 "params": { 00:32:01.869 "name": "Nvme$subsystem", 00:32:01.869 "trtype": "$TEST_TRANSPORT", 00:32:01.869 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:01.869 "adrfam": "ipv4", 00:32:01.869 "trsvcid": "$NVMF_PORT", 00:32:01.869 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:01.869 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:01.869 "hdgst": ${hdgst:-false}, 00:32:01.869 "ddgst": ${ddgst:-false} 00:32:01.869 }, 00:32:01.869 "method": "bdev_nvme_attach_controller" 00:32:01.869 } 00:32:01.869 EOF 
00:32:01.869 )") 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:01.869 "params": { 00:32:01.869 "name": "Nvme0", 00:32:01.869 "trtype": "tcp", 00:32:01.869 "traddr": "10.0.0.2", 00:32:01.869 "adrfam": "ipv4", 00:32:01.869 "trsvcid": "4420", 00:32:01.869 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:01.869 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:01.869 "hdgst": false, 00:32:01.869 "ddgst": false 00:32:01.869 }, 00:32:01.869 "method": "bdev_nvme_attach_controller" 00:32:01.869 }' 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:01.869 03:12:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:02.128 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:02.128 fio-3.35 00:32:02.128 Starting 1 thread 00:32:02.128 EAL: No free 2048 kB hugepages reported on node 1 00:32:14.324 00:32:14.324 filename0: (groupid=0, jobs=1): err= 0: pid=496131: Mon May 13 03:13:03 2024 00:32:14.324 read: IOPS=95, BW=381KiB/s (390kB/s)(3808KiB/10005msec) 00:32:14.324 slat (nsec): min=4835, max=54750, avg=9410.09, stdev=2885.44 00:32:14.324 clat (usec): min=41798, max=47538, avg=42006.67, stdev=377.78 00:32:14.324 lat (usec): min=41806, max=47554, avg=42016.08, stdev=377.70 00:32:14.324 clat percentiles (usec): 00:32:14.324 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:32:14.324 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:32:14.324 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:32:14.324 | 99.00th=[43254], 99.50th=[43254], 
99.90th=[47449], 99.95th=[47449], 00:32:14.324 | 99.99th=[47449] 00:32:14.324 bw ( KiB/s): min= 352, max= 384, per=99.58%, avg=379.20, stdev=11.72, samples=20 00:32:14.325 iops : min= 88, max= 96, avg=94.80, stdev= 2.93, samples=20 00:32:14.325 lat (msec) : 50=100.00% 00:32:14.325 cpu : usr=89.46%, sys=10.27%, ctx=14, majf=0, minf=230 00:32:14.325 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:14.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.325 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.325 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:14.325 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:14.325 00:32:14.325 Run status group 0 (all jobs): 00:32:14.325 READ: bw=381KiB/s (390kB/s), 381KiB/s-381KiB/s (390kB/s-390kB/s), io=3808KiB (3899kB), run=10005-10005msec 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.325 00:32:14.325 real 0m11.089s 00:32:14.325 user 0m9.959s 00:32:14.325 sys 0m1.307s 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:14.325 ************************************ 00:32:14.325 END TEST fio_dif_1_default 00:32:14.325 ************************************ 00:32:14.325 03:13:03 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:32:14.325 03:13:03 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:32:14.325 03:13:03 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:14.325 03:13:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:14.325 ************************************ 00:32:14.325 START TEST fio_dif_1_multi_subsystems 00:32:14.325 ************************************ 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- 
# local sub 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:14.325 bdev_null0 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:14.325 [2024-05-13 03:13:03.653019] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:14.325 bdev_null1 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.325 03:13:03 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:14.325 { 00:32:14.325 "params": { 00:32:14.325 "name": "Nvme$subsystem", 00:32:14.325 "trtype": "$TEST_TRANSPORT", 00:32:14.325 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:14.325 "adrfam": "ipv4", 00:32:14.325 "trsvcid": "$NVMF_PORT", 00:32:14.325 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:14.325 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:14.325 "hdgst": ${hdgst:-false}, 00:32:14.325 "ddgst": ${ddgst:-false} 00:32:14.325 }, 00:32:14.325 "method": "bdev_nvme_attach_controller" 00:32:14.325 } 00:32:14.325 EOF 00:32:14.325 )") 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # shift 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:14.325 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:14.325 { 00:32:14.325 "params": { 00:32:14.325 "name": "Nvme$subsystem", 00:32:14.325 "trtype": "$TEST_TRANSPORT", 00:32:14.325 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:14.325 "adrfam": "ipv4", 00:32:14.325 "trsvcid": "$NVMF_PORT", 00:32:14.325 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:14.325 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:14.325 "hdgst": ${hdgst:-false}, 00:32:14.325 "ddgst": ${ddgst:-false} 00:32:14.325 }, 00:32:14.325 "method": "bdev_nvme_attach_controller" 00:32:14.325 } 00:32:14.325 EOF 00:32:14.325 )") 00:32:14.326 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:32:14.326 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:32:14.326 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:32:14.326 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
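The jq/printf plumbing just below merges one bdev_nvme_attach_controller block per subsystem into the JSON that fio reads via --spdk_json_conf /dev/fd/62. Only the per-controller params appear verbatim in the trace; the outer wrapper sketched here is an assumed shape for what gen_nvmf_target_json emits, and the temporary file path is purely illustrative.

  cat <<'JSON' > /tmp/nvmf_fio_bdev.json
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          { "method": "bdev_nvme_attach_controller",
            "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                        "adrfam": "ipv4", "trsvcid": "4420",
                        "subnqn": "nqn.2016-06.io.spdk:cnode0",
                        "hostnqn": "nqn.2016-06.io.spdk:host0",
                        "hdgst": false, "ddgst": false } },
          { "method": "bdev_nvme_attach_controller",
            "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                        "adrfam": "ipv4", "trsvcid": "4420",
                        "subnqn": "nqn.2016-06.io.spdk:cnode1",
                        "hostnqn": "nqn.2016-06.io.spdk:host1",
                        "hdgst": false, "ddgst": false } }
        ]
      }
    ]
  }
  JSON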
00:32:14.326 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:32:14.326 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:14.326 "params": { 00:32:14.326 "name": "Nvme0", 00:32:14.326 "trtype": "tcp", 00:32:14.326 "traddr": "10.0.0.2", 00:32:14.326 "adrfam": "ipv4", 00:32:14.326 "trsvcid": "4420", 00:32:14.326 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:14.326 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:14.326 "hdgst": false, 00:32:14.326 "ddgst": false 00:32:14.326 }, 00:32:14.326 "method": "bdev_nvme_attach_controller" 00:32:14.326 },{ 00:32:14.326 "params": { 00:32:14.326 "name": "Nvme1", 00:32:14.326 "trtype": "tcp", 00:32:14.326 "traddr": "10.0.0.2", 00:32:14.326 "adrfam": "ipv4", 00:32:14.326 "trsvcid": "4420", 00:32:14.326 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:14.326 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:14.326 "hdgst": false, 00:32:14.326 "ddgst": false 00:32:14.326 }, 00:32:14.326 "method": "bdev_nvme_attach_controller" 00:32:14.326 }' 00:32:14.326 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:32:14.326 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:32:14.326 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:32:14.326 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:14.326 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:32:14.326 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:32:14.326 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:32:14.326 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:32:14.326 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:14.326 03:13:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:14.326 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:14.326 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:14.326 fio-3.35 00:32:14.326 Starting 2 threads 00:32:14.326 EAL: No free 2048 kB hugepages reported on node 1 00:32:24.296 00:32:24.296 filename0: (groupid=0, jobs=1): err= 0: pid=497426: Mon May 13 03:13:14 2024 00:32:24.296 read: IOPS=184, BW=740KiB/s (757kB/s)(7408KiB/10017msec) 00:32:24.296 slat (nsec): min=6955, max=56730, avg=9685.87, stdev=4838.51 00:32:24.296 clat (usec): min=1025, max=42857, avg=21604.17, stdev=20292.02 00:32:24.296 lat (usec): min=1032, max=42885, avg=21613.86, stdev=20290.90 00:32:24.296 clat percentiles (usec): 00:32:24.296 | 1.00th=[ 1057], 5.00th=[ 1090], 10.00th=[ 1106], 20.00th=[ 1123], 00:32:24.296 | 30.00th=[ 1139], 40.00th=[ 1205], 50.00th=[41157], 60.00th=[41681], 00:32:24.296 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:32:24.296 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:32:24.296 | 99.99th=[42730] 
00:32:24.296 bw ( KiB/s): min= 704, max= 768, per=66.10%, avg=739.20, stdev=32.67, samples=20 00:32:24.296 iops : min= 176, max= 192, avg=184.80, stdev= 8.17, samples=20 00:32:24.296 lat (msec) : 2=49.68%, 50=50.32% 00:32:24.296 cpu : usr=94.61%, sys=5.09%, ctx=15, majf=0, minf=193 00:32:24.296 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:24.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:24.296 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:24.296 issued rwts: total=1852,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:24.296 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:24.296 filename1: (groupid=0, jobs=1): err= 0: pid=497427: Mon May 13 03:13:14 2024 00:32:24.296 read: IOPS=94, BW=379KiB/s (388kB/s)(3792KiB/10018msec) 00:32:24.296 slat (nsec): min=6971, max=44675, avg=9626.14, stdev=4521.49 00:32:24.296 clat (usec): min=41010, max=43163, avg=42235.90, stdev=456.84 00:32:24.296 lat (usec): min=41032, max=43177, avg=42245.53, stdev=456.78 00:32:24.296 clat percentiles (usec): 00:32:24.296 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:32:24.296 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:32:24.296 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[43254], 00:32:24.296 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:32:24.296 | 99.99th=[43254] 00:32:24.296 bw ( KiB/s): min= 352, max= 384, per=33.72%, avg=377.60, stdev=13.13, samples=20 00:32:24.296 iops : min= 88, max= 96, avg=94.40, stdev= 3.28, samples=20 00:32:24.296 lat (msec) : 50=100.00% 00:32:24.296 cpu : usr=94.18%, sys=5.52%, ctx=13, majf=0, minf=43 00:32:24.296 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:24.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:24.296 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:24.296 issued rwts: total=948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:24.296 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:24.296 00:32:24.296 Run status group 0 (all jobs): 00:32:24.296 READ: bw=1118KiB/s (1145kB/s), 379KiB/s-740KiB/s (388kB/s-757kB/s), io=10.9MiB (11.5MB), run=10017-10018msec 00:32:24.296 03:13:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:32:24.296 03:13:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:32:24.296 03:13:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:32:24.296 03:13:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:24.296 03:13:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:32:24.296 03:13:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:24.296 03:13:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.296 03:13:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:24.296 03:13:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.296 03:13:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:24.296 03:13:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.296 03:13:14 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:24.296 03:13:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.296 03:13:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:32:24.296 03:13:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:24.296 03:13:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:32:24.296 03:13:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:24.296 03:13:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.296 03:13:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:24.296 03:13:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.296 03:13:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:24.296 03:13:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.296 03:13:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:24.296 03:13:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.296 00:32:24.296 real 0m11.377s 00:32:24.296 user 0m20.285s 00:32:24.296 sys 0m1.380s 00:32:24.296 03:13:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:24.296 03:13:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:24.296 ************************************ 00:32:24.296 END TEST fio_dif_1_multi_subsystems 00:32:24.296 ************************************ 00:32:24.296 03:13:15 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:32:24.296 03:13:15 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:32:24.296 03:13:15 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:24.296 03:13:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:24.296 ************************************ 00:32:24.296 START TEST fio_dif_rand_params 00:32:24.296 ************************************ 00:32:24.296 03:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:32:24.296 03:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:32:24.296 03:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:32:24.296 03:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:32:24.296 03:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:32:24.296 03:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:32:24.296 03:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:32:24.296 03:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:32:24.296 03:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:32:24.296 03:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:24.296 03:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:24.296 03:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:24.296 03:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:32:24.296 03:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:24.296 03:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.296 03:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:24.296 bdev_null0 00:32:24.296 03:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.296 03:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:24.296 03:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.296 03:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:24.296 03:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.296 03:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:24.296 03:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.296 03:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:24.296 03:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.296 03:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:24.296 03:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.296 03:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:24.296 [2024-05-13 03:13:15.089177] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:24.296 03:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.296 03:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:32:24.296 03:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:32:24.296 03:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:24.296 03:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:24.296 03:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:24.296 03:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:24.296 03:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:24.296 03:13:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:32:24.296 03:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:32:24.296 03:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:24.296 03:13:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:32:24.296 03:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:24.296 03:13:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:24.296 03:13:15 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1335 -- # local sanitizers 00:32:24.296 03:13:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:24.296 { 00:32:24.296 "params": { 00:32:24.296 "name": "Nvme$subsystem", 00:32:24.296 "trtype": "$TEST_TRANSPORT", 00:32:24.296 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:24.296 "adrfam": "ipv4", 00:32:24.296 "trsvcid": "$NVMF_PORT", 00:32:24.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:24.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:24.297 "hdgst": ${hdgst:-false}, 00:32:24.297 "ddgst": ${ddgst:-false} 00:32:24.297 }, 00:32:24.297 "method": "bdev_nvme_attach_controller" 00:32:24.297 } 00:32:24.297 EOF 00:32:24.297 )") 00:32:24.297 03:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:24.297 03:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:32:24.297 03:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:32:24.297 03:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:32:24.297 03:13:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:24.554 03:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:24.554 03:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:24.554 03:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:24.554 03:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:32:24.554 03:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:32:24.554 03:13:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
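Alongside that JSON bdev config, gen_fio_conf builds the fio job itself (handed to fio as /dev/fd/61) from the bs/numjobs/iodepth/runtime values set at the top of this fio_dif_rand_params pass. The trace never prints the job file, so the following is only an illustrative guess at its shape; the Nvme0n1 filename assumes the usual SPDK controller-namespace bdev naming.

  cat <<'FIO' > /tmp/dif_rand_params.fio
  [global]
  thread=1
  ioengine=spdk_bdev
  rw=randread
  bs=128k
  iodepth=3
  numjobs=3
  runtime=5
  time_based=1
  [filename0]
  filename=Nvme0n1
  FIO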
00:32:24.554 03:13:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:32:24.554 03:13:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:24.554 "params": { 00:32:24.554 "name": "Nvme0", 00:32:24.554 "trtype": "tcp", 00:32:24.554 "traddr": "10.0.0.2", 00:32:24.554 "adrfam": "ipv4", 00:32:24.554 "trsvcid": "4420", 00:32:24.554 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:24.554 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:24.554 "hdgst": false, 00:32:24.554 "ddgst": false 00:32:24.554 }, 00:32:24.554 "method": "bdev_nvme_attach_controller" 00:32:24.554 }' 00:32:24.554 03:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:32:24.554 03:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:32:24.554 03:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:32:24.554 03:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:24.554 03:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:32:24.554 03:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:32:24.554 03:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:32:24.554 03:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:32:24.554 03:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:24.554 03:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:24.554 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:24.554 ... 
00:32:24.554 fio-3.35 00:32:24.554 Starting 3 threads 00:32:24.835 EAL: No free 2048 kB hugepages reported on node 1 00:32:30.110 00:32:30.110 filename0: (groupid=0, jobs=1): err= 0: pid=498824: Mon May 13 03:13:20 2024 00:32:30.110 read: IOPS=183, BW=22.9MiB/s (24.0MB/s)(115MiB/5009msec) 00:32:30.110 slat (nsec): min=4686, max=28955, avg=12074.91, stdev=2810.40 00:32:30.110 clat (usec): min=7235, max=58512, avg=16346.74, stdev=12435.98 00:32:30.110 lat (usec): min=7248, max=58526, avg=16358.81, stdev=12436.18 00:32:30.110 clat percentiles (usec): 00:32:30.110 | 1.00th=[ 7570], 5.00th=[ 8160], 10.00th=[ 8979], 20.00th=[ 9765], 00:32:30.110 | 30.00th=[10945], 40.00th=[12518], 50.00th=[13173], 60.00th=[13829], 00:32:30.110 | 70.00th=[14353], 80.00th=[15139], 90.00th=[17695], 95.00th=[53740], 00:32:30.110 | 99.00th=[56361], 99.50th=[56886], 99.90th=[58459], 99.95th=[58459], 00:32:30.110 | 99.99th=[58459] 00:32:30.110 bw ( KiB/s): min=14592, max=31488, per=33.03%, avg=23424.00, stdev=5249.56, samples=10 00:32:30.110 iops : min= 114, max= 246, avg=183.00, stdev=41.01, samples=10 00:32:30.110 lat (msec) : 10=23.09%, 20=67.43%, 50=0.11%, 100=9.37% 00:32:30.110 cpu : usr=90.60%, sys=7.73%, ctx=108, majf=0, minf=109 00:32:30.110 IO depths : 1=2.6%, 2=97.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:30.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:30.110 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:30.110 issued rwts: total=918,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:30.110 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:30.111 filename0: (groupid=0, jobs=1): err= 0: pid=498825: Mon May 13 03:13:20 2024 00:32:30.111 read: IOPS=164, BW=20.6MiB/s (21.6MB/s)(103MiB/5007msec) 00:32:30.111 slat (nsec): min=5739, max=31297, avg=13101.17, stdev=3009.91 00:32:30.111 clat (usec): min=6229, max=60426, avg=18185.38, stdev=14074.83 00:32:30.111 lat (usec): min=6242, max=60439, avg=18198.48, stdev=14075.00 00:32:30.111 clat percentiles (usec): 00:32:30.111 | 1.00th=[ 7504], 5.00th=[ 7898], 10.00th=[ 8848], 20.00th=[ 9896], 00:32:30.111 | 30.00th=[12256], 40.00th=[13173], 50.00th=[13960], 60.00th=[14746], 00:32:30.111 | 70.00th=[15401], 80.00th=[16712], 90.00th=[53740], 95.00th=[55313], 00:32:30.111 | 99.00th=[57410], 99.50th=[57934], 99.90th=[60556], 99.95th=[60556], 00:32:30.111 | 99.99th=[60556] 00:32:30.111 bw ( KiB/s): min=15360, max=25600, per=29.68%, avg=21043.20, stdev=3385.06, samples=10 00:32:30.111 iops : min= 120, max= 200, avg=164.40, stdev=26.45, samples=10 00:32:30.111 lat (msec) : 10=20.48%, 20=67.03%, 50=0.36%, 100=12.12% 00:32:30.111 cpu : usr=90.49%, sys=7.45%, ctx=175, majf=0, minf=109 00:32:30.111 IO depths : 1=2.1%, 2=97.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:30.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:30.111 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:30.111 issued rwts: total=825,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:30.111 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:30.111 filename0: (groupid=0, jobs=1): err= 0: pid=498826: Mon May 13 03:13:20 2024 00:32:30.111 read: IOPS=206, BW=25.9MiB/s (27.1MB/s)(130MiB/5022msec) 00:32:30.111 slat (nsec): min=4916, max=27041, avg=12723.98, stdev=2345.18 00:32:30.111 clat (usec): min=6579, max=95188, avg=14480.23, stdev=13215.82 00:32:30.111 lat (usec): min=6592, max=95200, avg=14492.95, stdev=13215.76 00:32:30.111 clat percentiles (usec): 
00:32:30.111 | 1.00th=[ 6980], 5.00th=[ 7570], 10.00th=[ 7963], 20.00th=[ 9110], 00:32:30.111 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10421], 60.00th=[11076], 00:32:30.111 | 70.00th=[11469], 80.00th=[11994], 90.00th=[47973], 95.00th=[51643], 00:32:30.111 | 99.00th=[53216], 99.50th=[54264], 99.90th=[92799], 99.95th=[94897], 00:32:30.111 | 99.99th=[94897] 00:32:30.111 bw ( KiB/s): min=19456, max=31744, per=37.41%, avg=26526.00, stdev=4454.30, samples=10 00:32:30.111 iops : min= 152, max= 248, avg=207.20, stdev=34.84, samples=10 00:32:30.111 lat (msec) : 10=42.06%, 20=47.93%, 50=1.15%, 100=8.85% 00:32:30.111 cpu : usr=91.79%, sys=7.47%, ctx=8, majf=0, minf=71 00:32:30.111 IO depths : 1=1.7%, 2=98.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:30.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:30.111 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:30.111 issued rwts: total=1039,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:30.111 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:30.111 00:32:30.111 Run status group 0 (all jobs): 00:32:30.111 READ: bw=69.2MiB/s (72.6MB/s), 20.6MiB/s-25.9MiB/s (21.6MB/s-27.1MB/s), io=348MiB (365MB), run=5007-5022msec 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
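The create_subsystems 0 1 2 call entered above expands into the per-subsystem rpc_cmd calls traced below: a 64 MiB null bdev with 512-byte blocks, 16-byte metadata and DIF type 2, an NVMe-oF subsystem, a namespace, and a TCP listener. As a standalone reference, roughly the same sequence can be issued directly through scripts/rpc.py (which is what the test's rpc_cmd wrapper drives); SPDK_ROOT is a placeholder variable, the path mirrors this workspace, and the sketch assumes the TCP transport was already created earlier in the test.

# Sketch of the setup/teardown cycle traced around this point, via direct rpc.py calls.
SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
for i in 0 1 2; do
    # 64 MiB null bdev, 512-byte blocks, 16-byte metadata, DIF type 2
    "$SPDK_ROOT/scripts/rpc.py" bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 2
    "$SPDK_ROOT/scripts/rpc.py" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        --serial-number "53313233-$i" --allow-any-host
    "$SPDK_ROOT/scripts/rpc.py" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
    "$SPDK_ROOT/scripts/rpc.py" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done
# ... run fio against the three targets ...
# Teardown mirrors the destroy_subsystems trace above: drop the subsystem first, then the bdev.
for i in 0 1 2; do
    "$SPDK_ROOT/scripts/rpc.py" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    "$SPDK_ROOT/scripts/rpc.py" bdev_null_delete "bdev_null$i"
done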
00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:30.370 bdev_null0 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:30.370 [2024-05-13 03:13:21.074085] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:30.370 bdev_null1 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:32:30.370 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:30.371 bdev_null2 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat 
<<-EOF 00:32:30.371 { 00:32:30.371 "params": { 00:32:30.371 "name": "Nvme$subsystem", 00:32:30.371 "trtype": "$TEST_TRANSPORT", 00:32:30.371 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:30.371 "adrfam": "ipv4", 00:32:30.371 "trsvcid": "$NVMF_PORT", 00:32:30.371 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:30.371 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:30.371 "hdgst": ${hdgst:-false}, 00:32:30.371 "ddgst": ${ddgst:-false} 00:32:30.371 }, 00:32:30.371 "method": "bdev_nvme_attach_controller" 00:32:30.371 } 00:32:30.371 EOF 00:32:30.371 )") 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:30.371 { 00:32:30.371 "params": { 00:32:30.371 "name": "Nvme$subsystem", 00:32:30.371 "trtype": "$TEST_TRANSPORT", 00:32:30.371 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:30.371 "adrfam": "ipv4", 00:32:30.371 "trsvcid": "$NVMF_PORT", 00:32:30.371 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:30.371 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:30.371 "hdgst": ${hdgst:-false}, 00:32:30.371 "ddgst": ${ddgst:-false} 00:32:30.371 }, 00:32:30.371 "method": "bdev_nvme_attach_controller" 00:32:30.371 } 00:32:30.371 EOF 00:32:30.371 )") 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file++ )) 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:30.371 { 00:32:30.371 "params": { 00:32:30.371 "name": "Nvme$subsystem", 00:32:30.371 "trtype": "$TEST_TRANSPORT", 00:32:30.371 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:30.371 "adrfam": "ipv4", 00:32:30.371 "trsvcid": "$NVMF_PORT", 00:32:30.371 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:30.371 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:30.371 "hdgst": ${hdgst:-false}, 00:32:30.371 "ddgst": ${ddgst:-false} 00:32:30.371 }, 00:32:30.371 "method": "bdev_nvme_attach_controller" 00:32:30.371 } 00:32:30.371 EOF 00:32:30.371 )") 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:30.371 "params": { 00:32:30.371 "name": "Nvme0", 00:32:30.371 "trtype": "tcp", 00:32:30.371 "traddr": "10.0.0.2", 00:32:30.371 "adrfam": "ipv4", 00:32:30.371 "trsvcid": "4420", 00:32:30.371 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:30.371 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:30.371 "hdgst": false, 00:32:30.371 "ddgst": false 00:32:30.371 }, 00:32:30.371 "method": "bdev_nvme_attach_controller" 00:32:30.371 },{ 00:32:30.371 "params": { 00:32:30.371 "name": "Nvme1", 00:32:30.371 "trtype": "tcp", 00:32:30.371 "traddr": "10.0.0.2", 00:32:30.371 "adrfam": "ipv4", 00:32:30.371 "trsvcid": "4420", 00:32:30.371 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:30.371 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:30.371 "hdgst": false, 00:32:30.371 "ddgst": false 00:32:30.371 }, 00:32:30.371 "method": "bdev_nvme_attach_controller" 00:32:30.371 },{ 00:32:30.371 "params": { 00:32:30.371 "name": "Nvme2", 00:32:30.371 "trtype": "tcp", 00:32:30.371 "traddr": "10.0.0.2", 00:32:30.371 "adrfam": "ipv4", 00:32:30.371 "trsvcid": "4420", 00:32:30.371 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:32:30.371 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:32:30.371 "hdgst": false, 00:32:30.371 "ddgst": false 00:32:30.371 }, 00:32:30.371 "method": "bdev_nvme_attach_controller" 00:32:30.371 }' 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:32:30.371 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:32:30.629 03:13:21 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1341 -- # asan_lib= 00:32:30.629 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:32:30.630 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:30.630 03:13:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:30.630 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:30.630 ... 00:32:30.630 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:30.630 ... 00:32:30.630 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:30.630 ... 00:32:30.630 fio-3.35 00:32:30.630 Starting 24 threads 00:32:30.888 EAL: No free 2048 kB hugepages reported on node 1 00:32:43.089 00:32:43.089 filename0: (groupid=0, jobs=1): err= 0: pid=499691: Mon May 13 03:13:32 2024 00:32:43.089 read: IOPS=481, BW=1926KiB/s (1973kB/s)(18.8MiB/10013msec) 00:32:43.089 slat (usec): min=8, max=198, avg=45.79, stdev=24.09 00:32:43.089 clat (usec): min=12332, max=75506, avg=32866.50, stdev=4539.51 00:32:43.089 lat (usec): min=12370, max=75541, avg=32912.29, stdev=4538.07 00:32:43.089 clat percentiles (usec): 00:32:43.089 | 1.00th=[15664], 5.00th=[30278], 10.00th=[31065], 20.00th=[31851], 00:32:43.089 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:32:43.089 | 70.00th=[32900], 80.00th=[33162], 90.00th=[34866], 95.00th=[38536], 00:32:43.089 | 99.00th=[49546], 99.50th=[58459], 99.90th=[68682], 99.95th=[74974], 00:32:43.089 | 99.99th=[76022] 00:32:43.089 bw ( KiB/s): min= 1664, max= 2048, per=4.15%, avg=1922.32, stdev=97.66, samples=19 00:32:43.089 iops : min= 416, max= 512, avg=480.58, stdev=24.42, samples=19 00:32:43.089 lat (msec) : 20=1.62%, 50=97.43%, 100=0.95% 00:32:43.089 cpu : usr=98.09%, sys=1.49%, ctx=14, majf=0, minf=9 00:32:43.089 IO depths : 1=2.5%, 2=8.0%, 4=22.5%, 8=56.7%, 16=10.3%, 32=0.0%, >=64=0.0% 00:32:43.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.089 complete : 0=0.0%, 4=93.8%, 8=0.8%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.089 issued rwts: total=4822,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.089 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:43.089 filename0: (groupid=0, jobs=1): err= 0: pid=499692: Mon May 13 03:13:32 2024 00:32:43.089 read: IOPS=478, BW=1915KiB/s (1961kB/s)(18.8MiB/10027msec) 00:32:43.089 slat (usec): min=7, max=153, avg=55.50, stdev=25.61 00:32:43.089 clat (usec): min=11060, max=61442, avg=33012.30, stdev=5181.92 00:32:43.089 lat (usec): min=11158, max=61462, avg=33067.81, stdev=5182.00 00:32:43.089 clat percentiles (usec): 00:32:43.089 | 1.00th=[15008], 5.00th=[28967], 10.00th=[30802], 20.00th=[31589], 00:32:43.089 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:32:43.089 | 70.00th=[32900], 80.00th=[33424], 90.00th=[35390], 95.00th=[43779], 00:32:43.089 | 99.00th=[53740], 99.50th=[54789], 99.90th=[61080], 99.95th=[61080], 00:32:43.089 | 99.99th=[61604] 00:32:43.089 bw ( KiB/s): min= 1664, max= 2048, per=4.13%, avg=1913.15, stdev=81.38, samples=20 00:32:43.089 iops : min= 416, max= 512, avg=478.25, stdev=20.34, samples=20 00:32:43.089 lat (msec) : 20=2.35%, 50=95.79%, 100=1.85% 00:32:43.089 
cpu : usr=98.06%, sys=1.48%, ctx=14, majf=0, minf=10 00:32:43.089 IO depths : 1=3.0%, 2=6.8%, 4=20.4%, 8=59.5%, 16=10.4%, 32=0.0%, >=64=0.0% 00:32:43.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.089 complete : 0=0.0%, 4=93.9%, 8=0.6%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.089 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.089 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:43.089 filename0: (groupid=0, jobs=1): err= 0: pid=499693: Mon May 13 03:13:32 2024 00:32:43.089 read: IOPS=485, BW=1943KiB/s (1989kB/s)(19.0MiB/10002msec) 00:32:43.089 slat (usec): min=8, max=477, avg=28.69, stdev=16.59 00:32:43.089 clat (usec): min=20509, max=52998, avg=32727.36, stdev=2245.65 00:32:43.089 lat (usec): min=20552, max=53021, avg=32756.05, stdev=2245.55 00:32:43.089 clat percentiles (usec): 00:32:43.089 | 1.00th=[25560], 5.00th=[30540], 10.00th=[31327], 20.00th=[31851], 00:32:43.089 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:32:43.089 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[35914], 00:32:43.089 | 99.00th=[41157], 99.50th=[43254], 99.90th=[52691], 99.95th=[53216], 00:32:43.089 | 99.99th=[53216] 00:32:43.089 bw ( KiB/s): min= 1872, max= 2048, per=4.20%, avg=1944.21, stdev=53.01, samples=19 00:32:43.089 iops : min= 468, max= 512, avg=486.05, stdev=13.25, samples=19 00:32:43.089 lat (msec) : 50=99.88%, 100=0.12% 00:32:43.089 cpu : usr=96.49%, sys=2.39%, ctx=139, majf=0, minf=9 00:32:43.089 IO depths : 1=2.8%, 2=5.9%, 4=19.3%, 8=62.2%, 16=9.8%, 32=0.0%, >=64=0.0% 00:32:43.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.089 complete : 0=0.0%, 4=93.3%, 8=1.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.089 issued rwts: total=4858,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.089 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:43.089 filename0: (groupid=0, jobs=1): err= 0: pid=499694: Mon May 13 03:13:32 2024 00:32:43.089 read: IOPS=481, BW=1927KiB/s (1973kB/s)(18.8MiB/10009msec) 00:32:43.089 slat (usec): min=7, max=116, avg=31.55, stdev=18.47 00:32:43.089 clat (usec): min=8055, max=61955, avg=33034.44, stdev=3959.49 00:32:43.089 lat (usec): min=8087, max=61986, avg=33065.99, stdev=3959.31 00:32:43.089 clat percentiles (usec): 00:32:43.089 | 1.00th=[20055], 5.00th=[30278], 10.00th=[31327], 20.00th=[31851], 00:32:43.089 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:32:43.089 | 70.00th=[32900], 80.00th=[33424], 90.00th=[35390], 95.00th=[38536], 00:32:43.089 | 99.00th=[48497], 99.50th=[53216], 99.90th=[61604], 99.95th=[62129], 00:32:43.089 | 99.99th=[62129] 00:32:43.089 bw ( KiB/s): min= 1779, max= 2064, per=4.16%, avg=1924.20, stdev=76.90, samples=20 00:32:43.089 iops : min= 444, max= 516, avg=480.90, stdev=19.33, samples=20 00:32:43.089 lat (msec) : 10=0.17%, 20=0.73%, 50=98.34%, 100=0.77% 00:32:43.089 cpu : usr=90.34%, sys=4.48%, ctx=335, majf=0, minf=9 00:32:43.089 IO depths : 1=0.3%, 2=1.1%, 4=9.0%, 8=74.3%, 16=15.3%, 32=0.0%, >=64=0.0% 00:32:43.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.089 complete : 0=0.0%, 4=91.4%, 8=5.8%, 16=2.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.089 issued rwts: total=4822,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.089 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:43.089 filename0: (groupid=0, jobs=1): err= 0: pid=499695: Mon May 13 03:13:32 2024 00:32:43.089 read: IOPS=483, BW=1933KiB/s 
(1979kB/s)(18.9MiB/10003msec) 00:32:43.089 slat (usec): min=8, max=154, avg=37.51, stdev=18.76 00:32:43.089 clat (usec): min=14800, max=57578, avg=32793.04, stdev=2875.22 00:32:43.089 lat (usec): min=14822, max=57613, avg=32830.55, stdev=2874.69 00:32:43.089 clat percentiles (usec): 00:32:43.089 | 1.00th=[28181], 5.00th=[31065], 10.00th=[31589], 20.00th=[31851], 00:32:43.089 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:32:43.089 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[35914], 00:32:43.089 | 99.00th=[45351], 99.50th=[56361], 99.90th=[57410], 99.95th=[57410], 00:32:43.089 | 99.99th=[57410] 00:32:43.089 bw ( KiB/s): min= 1760, max= 2048, per=4.16%, avg=1927.79, stdev=65.18, samples=19 00:32:43.089 iops : min= 440, max= 512, avg=481.79, stdev=16.31, samples=19 00:32:43.089 lat (msec) : 20=0.46%, 50=98.94%, 100=0.60% 00:32:43.089 cpu : usr=97.53%, sys=1.71%, ctx=159, majf=0, minf=9 00:32:43.089 IO depths : 1=4.4%, 2=9.0%, 4=19.5%, 8=57.8%, 16=9.2%, 32=0.0%, >=64=0.0% 00:32:43.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.089 complete : 0=0.0%, 4=93.1%, 8=2.3%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.089 issued rwts: total=4834,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.089 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:43.089 filename0: (groupid=0, jobs=1): err= 0: pid=499696: Mon May 13 03:13:32 2024 00:32:43.089 read: IOPS=484, BW=1938KiB/s (1984kB/s)(18.9MiB/10008msec) 00:32:43.089 slat (usec): min=8, max=123, avg=39.89, stdev=17.87 00:32:43.089 clat (usec): min=20421, max=73983, avg=32688.44, stdev=1954.12 00:32:43.089 lat (usec): min=20458, max=74003, avg=32728.33, stdev=1952.80 00:32:43.089 clat percentiles (usec): 00:32:43.089 | 1.00th=[29492], 5.00th=[31065], 10.00th=[31589], 20.00th=[31851], 00:32:43.089 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:32:43.089 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[35390], 00:32:43.089 | 99.00th=[37487], 99.50th=[41157], 99.90th=[52691], 99.95th=[73925], 00:32:43.089 | 99.99th=[73925] 00:32:43.089 bw ( KiB/s): min= 1792, max= 2048, per=4.18%, avg=1933.47, stdev=72.59, samples=19 00:32:43.089 iops : min= 448, max= 512, avg=483.37, stdev=18.15, samples=19 00:32:43.089 lat (msec) : 50=99.67%, 100=0.33% 00:32:43.089 cpu : usr=98.31%, sys=1.29%, ctx=16, majf=0, minf=9 00:32:43.089 IO depths : 1=5.9%, 2=11.9%, 4=24.8%, 8=50.8%, 16=6.6%, 32=0.0%, >=64=0.0% 00:32:43.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.089 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.089 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.089 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:43.089 filename0: (groupid=0, jobs=1): err= 0: pid=499697: Mon May 13 03:13:32 2024 00:32:43.089 read: IOPS=484, BW=1938KiB/s (1984kB/s)(18.9MiB/10008msec) 00:32:43.089 slat (usec): min=8, max=175, avg=36.73, stdev=17.26 00:32:43.089 clat (usec): min=28370, max=52687, avg=32708.37, stdev=1660.62 00:32:43.089 lat (usec): min=28380, max=52709, avg=32745.09, stdev=1659.15 00:32:43.089 clat percentiles (usec): 00:32:43.089 | 1.00th=[30016], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:32:43.089 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:32:43.089 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34866], 00:32:43.089 | 99.00th=[37487], 99.50th=[41157], 99.90th=[52691], 99.95th=[52691], 
00:32:43.089 | 99.99th=[52691] 00:32:43.089 bw ( KiB/s): min= 1792, max= 2048, per=4.18%, avg=1933.47, stdev=58.73, samples=19 00:32:43.089 iops : min= 448, max= 512, avg=483.37, stdev=14.68, samples=19 00:32:43.089 lat (msec) : 50=99.67%, 100=0.33% 00:32:43.089 cpu : usr=98.17%, sys=1.42%, ctx=13, majf=0, minf=9 00:32:43.089 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:32:43.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.089 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.089 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.089 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:43.089 filename0: (groupid=0, jobs=1): err= 0: pid=499698: Mon May 13 03:13:32 2024 00:32:43.089 read: IOPS=481, BW=1928KiB/s (1974kB/s)(18.8MiB/10008msec) 00:32:43.089 slat (usec): min=8, max=134, avg=34.50, stdev=19.73 00:32:43.089 clat (usec): min=6445, max=69983, avg=32986.94, stdev=4321.90 00:32:43.089 lat (usec): min=6455, max=70003, avg=33021.44, stdev=4321.21 00:32:43.089 clat percentiles (usec): 00:32:43.089 | 1.00th=[19530], 5.00th=[30016], 10.00th=[31327], 20.00th=[31851], 00:32:43.089 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:32:43.089 | 70.00th=[32900], 80.00th=[33424], 90.00th=[34866], 95.00th=[39060], 00:32:43.089 | 99.00th=[51643], 99.50th=[54264], 99.90th=[69731], 99.95th=[69731], 00:32:43.089 | 99.99th=[69731] 00:32:43.089 bw ( KiB/s): min= 1683, max= 2108, per=4.16%, avg=1925.45, stdev=104.54, samples=20 00:32:43.089 iops : min= 420, max= 527, avg=481.25, stdev=26.27, samples=20 00:32:43.089 lat (msec) : 10=0.29%, 20=0.75%, 50=97.64%, 100=1.33% 00:32:43.089 cpu : usr=96.68%, sys=2.30%, ctx=197, majf=0, minf=9 00:32:43.089 IO depths : 1=0.4%, 2=1.6%, 4=12.8%, 8=71.4%, 16=13.8%, 32=0.0%, >=64=0.0% 00:32:43.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.089 complete : 0=0.0%, 4=92.1%, 8=3.6%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.089 issued rwts: total=4823,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.089 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:43.089 filename1: (groupid=0, jobs=1): err= 0: pid=499699: Mon May 13 03:13:32 2024 00:32:43.089 read: IOPS=475, BW=1901KiB/s (1947kB/s)(18.6MiB/10008msec) 00:32:43.089 slat (usec): min=7, max=521, avg=29.40, stdev=20.83 00:32:43.089 clat (usec): min=9679, max=68622, avg=33488.90, stdev=5670.31 00:32:43.089 lat (usec): min=9693, max=68655, avg=33518.31, stdev=5670.38 00:32:43.089 clat percentiles (usec): 00:32:43.089 | 1.00th=[15008], 5.00th=[28443], 10.00th=[31065], 20.00th=[31851], 00:32:43.089 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:32:43.089 | 70.00th=[33162], 80.00th=[33817], 90.00th=[38536], 95.00th=[45351], 00:32:43.089 | 99.00th=[54264], 99.50th=[56886], 99.90th=[68682], 99.95th=[68682], 00:32:43.089 | 99.99th=[68682] 00:32:43.089 bw ( KiB/s): min= 1664, max= 1992, per=4.08%, avg=1890.68, stdev=76.30, samples=19 00:32:43.089 iops : min= 416, max= 498, avg=472.63, stdev=19.07, samples=19 00:32:43.089 lat (msec) : 10=0.04%, 20=2.06%, 50=95.77%, 100=2.12% 00:32:43.089 cpu : usr=88.99%, sys=4.92%, ctx=204, majf=0, minf=9 00:32:43.089 IO depths : 1=0.4%, 2=1.4%, 4=12.9%, 8=71.6%, 16=13.8%, 32=0.0%, >=64=0.0% 00:32:43.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.089 complete : 0=0.0%, 4=92.2%, 8=3.5%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:32:43.089 issued rwts: total=4756,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.089 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:43.089 filename1: (groupid=0, jobs=1): err= 0: pid=499700: Mon May 13 03:13:32 2024 00:32:43.089 read: IOPS=479, BW=1917KiB/s (1963kB/s)(18.7MiB/10003msec) 00:32:43.089 slat (usec): min=8, max=1144, avg=27.95, stdev=33.20 00:32:43.089 clat (usec): min=10475, max=63174, avg=33238.25, stdev=4719.52 00:32:43.089 lat (usec): min=10494, max=63204, avg=33266.20, stdev=4719.01 00:32:43.089 clat percentiles (usec): 00:32:43.089 | 1.00th=[19268], 5.00th=[29492], 10.00th=[31327], 20.00th=[31851], 00:32:43.089 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:32:43.089 | 70.00th=[33162], 80.00th=[33817], 90.00th=[35914], 95.00th=[42206], 00:32:43.089 | 99.00th=[53740], 99.50th=[56361], 99.90th=[63177], 99.95th=[63177], 00:32:43.089 | 99.99th=[63177] 00:32:43.089 bw ( KiB/s): min= 1744, max= 2052, per=4.11%, avg=1904.68, stdev=79.67, samples=19 00:32:43.089 iops : min= 436, max= 513, avg=476.05, stdev=19.92, samples=19 00:32:43.089 lat (msec) : 20=1.46%, 50=97.06%, 100=1.48% 00:32:43.089 cpu : usr=91.82%, sys=3.97%, ctx=468, majf=0, minf=9 00:32:43.089 IO depths : 1=0.3%, 2=0.6%, 4=5.6%, 8=79.7%, 16=13.9%, 32=0.0%, >=64=0.0% 00:32:43.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.089 complete : 0=0.0%, 4=89.4%, 8=6.6%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.089 issued rwts: total=4794,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.089 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:43.089 filename1: (groupid=0, jobs=1): err= 0: pid=499701: Mon May 13 03:13:32 2024 00:32:43.089 read: IOPS=484, BW=1938KiB/s (1984kB/s)(18.9MiB/10008msec) 00:32:43.089 slat (usec): min=8, max=706, avg=40.84, stdev=24.38 00:32:43.089 clat (usec): min=17035, max=63789, avg=32675.76, stdev=2215.77 00:32:43.089 lat (usec): min=17081, max=63820, avg=32716.60, stdev=2215.78 00:32:43.089 clat percentiles (usec): 00:32:43.089 | 1.00th=[28705], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:32:43.089 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:32:43.089 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34866], 00:32:43.089 | 99.00th=[41157], 99.50th=[45876], 99.90th=[52167], 99.95th=[63177], 00:32:43.089 | 99.99th=[63701] 00:32:43.089 bw ( KiB/s): min= 1792, max= 2048, per=4.18%, avg=1933.47, stdev=72.59, samples=19 00:32:43.089 iops : min= 448, max= 512, avg=483.37, stdev=18.15, samples=19 00:32:43.089 lat (msec) : 20=0.29%, 50=99.34%, 100=0.37% 00:32:43.089 cpu : usr=89.61%, sys=4.31%, ctx=223, majf=0, minf=9 00:32:43.089 IO depths : 1=5.5%, 2=11.4%, 4=24.4%, 8=51.5%, 16=7.2%, 32=0.0%, >=64=0.0% 00:32:43.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.089 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.089 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.089 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:43.089 filename1: (groupid=0, jobs=1): err= 0: pid=499702: Mon May 13 03:13:32 2024 00:32:43.089 read: IOPS=466, BW=1865KiB/s (1909kB/s)(18.2MiB/10009msec) 00:32:43.089 slat (usec): min=8, max=116, avg=37.15, stdev=19.37 00:32:43.089 clat (usec): min=8027, max=61967, avg=34010.47, stdev=5024.89 00:32:43.089 lat (usec): min=8050, max=61988, avg=34047.62, stdev=5021.47 00:32:43.089 clat percentiles (usec): 00:32:43.089 | 1.00th=[21365], 
5.00th=[31065], 10.00th=[31589], 20.00th=[32113], 00:32:43.089 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:32:43.089 | 70.00th=[33162], 80.00th=[34341], 90.00th=[40109], 95.00th=[45876], 00:32:43.089 | 99.00th=[51643], 99.50th=[55837], 99.90th=[62129], 99.95th=[62129], 00:32:43.089 | 99.99th=[62129] 00:32:43.089 bw ( KiB/s): min= 1440, max= 2052, per=4.02%, avg=1862.65, stdev=163.06, samples=20 00:32:43.089 iops : min= 360, max= 513, avg=465.55, stdev=40.78, samples=20 00:32:43.089 lat (msec) : 10=0.21%, 20=0.54%, 50=97.62%, 100=1.63% 00:32:43.089 cpu : usr=98.14%, sys=1.45%, ctx=16, majf=0, minf=9 00:32:43.089 IO depths : 1=1.9%, 2=6.8%, 4=22.5%, 8=58.1%, 16=10.8%, 32=0.0%, >=64=0.0% 00:32:43.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.089 complete : 0=0.0%, 4=93.9%, 8=0.5%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.089 issued rwts: total=4666,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.089 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:43.089 filename1: (groupid=0, jobs=1): err= 0: pid=499703: Mon May 13 03:13:32 2024 00:32:43.089 read: IOPS=482, BW=1929KiB/s (1976kB/s)(18.9MiB/10009msec) 00:32:43.089 slat (usec): min=8, max=131, avg=36.06, stdev=19.01 00:32:43.089 clat (usec): min=12113, max=95336, avg=32884.38, stdev=4784.36 00:32:43.089 lat (usec): min=12124, max=95356, avg=32920.44, stdev=4784.90 00:32:43.089 clat percentiles (usec): 00:32:43.089 | 1.00th=[19792], 5.00th=[28443], 10.00th=[31065], 20.00th=[31851], 00:32:43.089 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:32:43.089 | 70.00th=[32900], 80.00th=[33424], 90.00th=[34866], 95.00th=[38536], 00:32:43.089 | 99.00th=[48497], 99.50th=[51119], 99.90th=[79168], 99.95th=[94897], 00:32:43.089 | 99.99th=[94897] 00:32:43.089 bw ( KiB/s): min= 1648, max= 2208, per=4.17%, avg=1931.79, stdev=104.35, samples=19 00:32:43.089 iops : min= 412, max= 552, avg=482.95, stdev=26.09, samples=19 00:32:43.089 lat (msec) : 20=1.04%, 50=98.34%, 100=0.62% 00:32:43.089 cpu : usr=97.37%, sys=1.80%, ctx=106, majf=0, minf=9 00:32:43.089 IO depths : 1=3.1%, 2=6.5%, 4=19.4%, 8=61.2%, 16=9.7%, 32=0.0%, >=64=0.0% 00:32:43.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.089 complete : 0=0.0%, 4=93.2%, 8=1.4%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.089 issued rwts: total=4828,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.089 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:43.089 filename1: (groupid=0, jobs=1): err= 0: pid=499704: Mon May 13 03:13:32 2024 00:32:43.089 read: IOPS=484, BW=1936KiB/s (1983kB/s)(18.9MiB/10008msec) 00:32:43.089 slat (usec): min=8, max=136, avg=38.25, stdev=19.92 00:32:43.090 clat (usec): min=16773, max=64520, avg=32751.11, stdev=3205.88 00:32:43.090 lat (usec): min=16824, max=64547, avg=32789.36, stdev=3205.76 00:32:43.090 clat percentiles (usec): 00:32:43.090 | 1.00th=[22676], 5.00th=[30540], 10.00th=[31327], 20.00th=[31851], 00:32:43.090 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:32:43.090 | 70.00th=[32900], 80.00th=[33162], 90.00th=[34341], 95.00th=[35390], 00:32:43.090 | 99.00th=[46924], 99.50th=[48497], 99.90th=[64226], 99.95th=[64226], 00:32:43.090 | 99.99th=[64750] 00:32:43.090 bw ( KiB/s): min= 1776, max= 2048, per=4.17%, avg=1931.79, stdev=67.22, samples=19 00:32:43.090 iops : min= 444, max= 512, avg=482.95, stdev=16.80, samples=19 00:32:43.090 lat (msec) : 20=0.66%, 50=98.93%, 100=0.41% 00:32:43.090 cpu : usr=98.09%, 
sys=1.47%, ctx=14, majf=0, minf=9 00:32:43.090 IO depths : 1=3.3%, 2=8.2%, 4=21.2%, 8=57.9%, 16=9.4%, 32=0.0%, >=64=0.0% 00:32:43.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.090 complete : 0=0.0%, 4=93.4%, 8=1.1%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.090 issued rwts: total=4844,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.090 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:43.090 filename1: (groupid=0, jobs=1): err= 0: pid=499705: Mon May 13 03:13:32 2024 00:32:43.090 read: IOPS=484, BW=1938KiB/s (1984kB/s)(19.0MiB/10025msec) 00:32:43.090 slat (usec): min=8, max=1070, avg=35.01, stdev=29.84 00:32:43.090 clat (usec): min=8264, max=64493, avg=32745.52, stdev=4692.49 00:32:43.090 lat (usec): min=8276, max=64529, avg=32780.53, stdev=4691.58 00:32:43.090 clat percentiles (usec): 00:32:43.090 | 1.00th=[15795], 5.00th=[29230], 10.00th=[31065], 20.00th=[31589], 00:32:43.090 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:32:43.090 | 70.00th=[32900], 80.00th=[33162], 90.00th=[34341], 95.00th=[38011], 00:32:43.090 | 99.00th=[51119], 99.50th=[55313], 99.90th=[64226], 99.95th=[64226], 00:32:43.090 | 99.99th=[64750] 00:32:43.090 bw ( KiB/s): min= 1788, max= 2048, per=4.18%, avg=1935.55, stdev=74.55, samples=20 00:32:43.090 iops : min= 447, max= 512, avg=483.85, stdev=18.62, samples=20 00:32:43.090 lat (msec) : 10=0.19%, 20=1.94%, 50=96.48%, 100=1.40% 00:32:43.090 cpu : usr=88.44%, sys=4.88%, ctx=101, majf=0, minf=10 00:32:43.090 IO depths : 1=3.2%, 2=7.2%, 4=20.8%, 8=59.1%, 16=9.7%, 32=0.0%, >=64=0.0% 00:32:43.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.090 complete : 0=0.0%, 4=93.6%, 8=1.0%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.090 issued rwts: total=4856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.090 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:43.090 filename1: (groupid=0, jobs=1): err= 0: pid=499706: Mon May 13 03:13:32 2024 00:32:43.090 read: IOPS=498, BW=1993KiB/s (2040kB/s)(19.5MiB/10021msec) 00:32:43.090 slat (usec): min=7, max=166, avg=30.20, stdev=18.16 00:32:43.090 clat (usec): min=8391, max=56880, avg=31872.18, stdev=4134.16 00:32:43.090 lat (usec): min=8399, max=56896, avg=31902.38, stdev=4135.26 00:32:43.090 clat percentiles (usec): 00:32:43.090 | 1.00th=[13698], 5.00th=[23200], 10.00th=[28705], 20.00th=[31589], 00:32:43.090 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:32:43.090 | 70.00th=[32900], 80.00th=[33162], 90.00th=[34341], 95.00th=[36439], 00:32:43.090 | 99.00th=[42730], 99.50th=[46924], 99.90th=[54789], 99.95th=[55837], 00:32:43.090 | 99.99th=[56886] 00:32:43.090 bw ( KiB/s): min= 1916, max= 2248, per=4.31%, avg=1992.60, stdev=100.91, samples=20 00:32:43.090 iops : min= 479, max= 562, avg=498.15, stdev=25.23, samples=20 00:32:43.090 lat (msec) : 10=0.22%, 20=2.24%, 50=97.42%, 100=0.12% 00:32:43.090 cpu : usr=97.47%, sys=1.74%, ctx=67, majf=0, minf=9 00:32:43.090 IO depths : 1=2.9%, 2=6.3%, 4=18.3%, 8=62.2%, 16=10.3%, 32=0.0%, >=64=0.0% 00:32:43.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.090 complete : 0=0.0%, 4=93.0%, 8=2.0%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.090 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.090 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:43.090 filename2: (groupid=0, jobs=1): err= 0: pid=499707: Mon May 13 03:13:32 2024 00:32:43.090 read: IOPS=491, BW=1965KiB/s 
(2012kB/s)(19.2MiB/10025msec) 00:32:43.090 slat (usec): min=5, max=190, avg=32.51, stdev=20.75 00:32:43.090 clat (usec): min=2079, max=57535, avg=32289.19, stdev=4475.81 00:32:43.090 lat (usec): min=2090, max=57626, avg=32321.70, stdev=4478.10 00:32:43.090 clat percentiles (usec): 00:32:43.090 | 1.00th=[11469], 5.00th=[27919], 10.00th=[31065], 20.00th=[31851], 00:32:43.090 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:32:43.090 | 70.00th=[32900], 80.00th=[33162], 90.00th=[34341], 95.00th=[36439], 00:32:43.090 | 99.00th=[47449], 99.50th=[50070], 99.90th=[54264], 99.95th=[54264], 00:32:43.090 | 99.99th=[57410] 00:32:43.090 bw ( KiB/s): min= 1792, max= 2400, per=4.25%, avg=1968.55, stdev=127.81, samples=20 00:32:43.090 iops : min= 448, max= 600, avg=492.10, stdev=31.93, samples=20 00:32:43.090 lat (msec) : 4=0.32%, 10=0.65%, 20=1.26%, 50=97.28%, 100=0.49% 00:32:43.090 cpu : usr=93.95%, sys=3.26%, ctx=53, majf=0, minf=11 00:32:43.090 IO depths : 1=2.8%, 2=6.5%, 4=17.9%, 8=62.8%, 16=10.0%, 32=0.0%, >=64=0.0% 00:32:43.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.090 complete : 0=0.0%, 4=92.6%, 8=1.9%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.090 issued rwts: total=4924,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.090 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:43.090 filename2: (groupid=0, jobs=1): err= 0: pid=499708: Mon May 13 03:13:32 2024 00:32:43.090 read: IOPS=484, BW=1938KiB/s (1984kB/s)(18.9MiB/10008msec) 00:32:43.090 slat (usec): min=8, max=119, avg=28.24, stdev=18.10 00:32:43.090 clat (usec): min=11104, max=63789, avg=32820.05, stdev=3259.09 00:32:43.090 lat (usec): min=11114, max=63820, avg=32848.29, stdev=3258.62 00:32:43.090 clat percentiles (usec): 00:32:43.090 | 1.00th=[21890], 5.00th=[30540], 10.00th=[31327], 20.00th=[31851], 00:32:43.090 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:32:43.090 | 70.00th=[32900], 80.00th=[33424], 90.00th=[34866], 95.00th=[35914], 00:32:43.090 | 99.00th=[46400], 99.50th=[47449], 99.90th=[56886], 99.95th=[63177], 00:32:43.090 | 99.99th=[63701] 00:32:43.090 bw ( KiB/s): min= 1792, max= 2048, per=4.18%, avg=1933.47, stdev=72.59, samples=19 00:32:43.090 iops : min= 448, max= 512, avg=483.37, stdev=18.15, samples=19 00:32:43.090 lat (msec) : 20=0.83%, 50=98.76%, 100=0.41% 00:32:43.090 cpu : usr=97.96%, sys=1.59%, ctx=17, majf=0, minf=10 00:32:43.090 IO depths : 1=3.3%, 2=7.9%, 4=20.8%, 8=58.4%, 16=9.6%, 32=0.0%, >=64=0.0% 00:32:43.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.090 complete : 0=0.0%, 4=93.3%, 8=1.2%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.090 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.090 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:43.090 filename2: (groupid=0, jobs=1): err= 0: pid=499709: Mon May 13 03:13:32 2024 00:32:43.090 read: IOPS=486, BW=1947KiB/s (1994kB/s)(19.1MiB/10026msec) 00:32:43.090 slat (usec): min=7, max=197, avg=41.42, stdev=23.89 00:32:43.090 clat (usec): min=10053, max=58718, avg=32525.31, stdev=2422.60 00:32:43.090 lat (usec): min=10110, max=58745, avg=32566.73, stdev=2420.25 00:32:43.090 clat percentiles (usec): 00:32:43.090 | 1.00th=[22938], 5.00th=[30540], 10.00th=[31327], 20.00th=[31851], 00:32:43.090 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:32:43.090 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[35390], 00:32:43.090 | 99.00th=[39584], 99.50th=[41681], 
99.90th=[54789], 99.95th=[58459], 00:32:43.090 | 99.99th=[58459] 00:32:43.090 bw ( KiB/s): min= 1788, max= 2048, per=4.20%, avg=1944.95, stdev=67.14, samples=20 00:32:43.090 iops : min= 447, max= 512, avg=486.20, stdev=16.73, samples=20 00:32:43.090 lat (msec) : 20=0.74%, 50=99.14%, 100=0.12% 00:32:43.090 cpu : usr=97.96%, sys=1.59%, ctx=34, majf=0, minf=9 00:32:43.090 IO depths : 1=4.4%, 2=10.5%, 4=24.8%, 8=52.1%, 16=8.2%, 32=0.0%, >=64=0.0% 00:32:43.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.090 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.090 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.090 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:43.090 filename2: (groupid=0, jobs=1): err= 0: pid=499710: Mon May 13 03:13:32 2024 00:32:43.090 read: IOPS=483, BW=1933KiB/s (1979kB/s)(18.9MiB/10008msec) 00:32:43.090 slat (usec): min=7, max=204, avg=40.44, stdev=22.25 00:32:43.090 clat (usec): min=7268, max=62581, avg=32829.01, stdev=4946.54 00:32:43.090 lat (usec): min=7277, max=62603, avg=32869.46, stdev=4946.48 00:32:43.090 clat percentiles (usec): 00:32:43.090 | 1.00th=[16057], 5.00th=[28443], 10.00th=[31065], 20.00th=[31851], 00:32:43.090 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:32:43.090 | 70.00th=[32900], 80.00th=[33424], 90.00th=[34866], 95.00th=[40109], 00:32:43.090 | 99.00th=[54789], 99.50th=[60031], 99.90th=[62653], 99.95th=[62653], 00:32:43.090 | 99.99th=[62653] 00:32:43.090 bw ( KiB/s): min= 1779, max= 2052, per=4.17%, avg=1929.00, stdev=80.38, samples=20 00:32:43.090 iops : min= 444, max= 513, avg=482.10, stdev=20.21, samples=20 00:32:43.090 lat (msec) : 10=0.37%, 20=1.47%, 50=96.67%, 100=1.49% 00:32:43.090 cpu : usr=97.89%, sys=1.67%, ctx=35, majf=0, minf=9 00:32:43.090 IO depths : 1=1.7%, 2=4.4%, 4=16.8%, 8=65.7%, 16=11.4%, 32=0.0%, >=64=0.0% 00:32:43.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.090 complete : 0=0.0%, 4=92.6%, 8=2.3%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.090 issued rwts: total=4836,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.090 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:43.090 filename2: (groupid=0, jobs=1): err= 0: pid=499711: Mon May 13 03:13:32 2024 00:32:43.090 read: IOPS=482, BW=1929KiB/s (1975kB/s)(18.9MiB/10012msec) 00:32:43.090 slat (usec): min=8, max=153, avg=55.63, stdev=25.67 00:32:43.090 clat (usec): min=8531, max=65480, avg=32744.96, stdev=4912.55 00:32:43.090 lat (usec): min=8615, max=65532, avg=32800.59, stdev=4911.22 00:32:43.090 clat percentiles (usec): 00:32:43.090 | 1.00th=[15139], 5.00th=[28443], 10.00th=[30802], 20.00th=[31589], 00:32:43.090 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:32:43.090 | 70.00th=[32900], 80.00th=[33162], 90.00th=[34341], 95.00th=[39584], 00:32:43.090 | 99.00th=[51119], 99.50th=[57934], 99.90th=[65274], 99.95th=[65274], 00:32:43.090 | 99.99th=[65274] 00:32:43.090 bw ( KiB/s): min= 1664, max= 2091, per=4.16%, avg=1923.53, stdev=108.35, samples=19 00:32:43.090 iops : min= 416, max= 522, avg=480.84, stdev=27.02, samples=19 00:32:43.090 lat (msec) : 10=0.12%, 20=1.99%, 50=96.23%, 100=1.66% 00:32:43.090 cpu : usr=98.11%, sys=1.44%, ctx=12, majf=0, minf=9 00:32:43.090 IO depths : 1=3.0%, 2=7.9%, 4=22.1%, 8=57.3%, 16=9.6%, 32=0.0%, >=64=0.0% 00:32:43.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.090 complete : 0=0.0%, 4=93.7%, 8=0.8%, 
16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.090 issued rwts: total=4828,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.090 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:43.090 filename2: (groupid=0, jobs=1): err= 0: pid=499712: Mon May 13 03:13:32 2024 00:32:43.090 read: IOPS=482, BW=1931KiB/s (1978kB/s)(18.9MiB/10008msec) 00:32:43.090 slat (usec): min=8, max=106, avg=38.60, stdev=17.93 00:32:43.090 clat (usec): min=16240, max=94554, avg=32811.35, stdev=3241.13 00:32:43.090 lat (usec): min=16302, max=94591, avg=32849.95, stdev=3240.92 00:32:43.090 clat percentiles (usec): 00:32:43.090 | 1.00th=[29230], 5.00th=[31065], 10.00th=[31589], 20.00th=[31851], 00:32:43.090 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:32:43.090 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[35390], 00:32:43.090 | 99.00th=[41157], 99.50th=[47449], 99.90th=[78119], 99.95th=[79168], 00:32:43.090 | 99.99th=[94897] 00:32:43.090 bw ( KiB/s): min= 1664, max= 2048, per=4.16%, avg=1926.74, stdev=90.24, samples=19 00:32:43.090 iops : min= 416, max= 512, avg=481.68, stdev=22.56, samples=19 00:32:43.090 lat (msec) : 20=0.21%, 50=99.46%, 100=0.33% 00:32:43.090 cpu : usr=98.24%, sys=1.33%, ctx=8, majf=0, minf=9 00:32:43.090 IO depths : 1=5.9%, 2=12.0%, 4=24.7%, 8=50.8%, 16=6.6%, 32=0.0%, >=64=0.0% 00:32:43.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.090 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.090 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.090 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:43.090 filename2: (groupid=0, jobs=1): err= 0: pid=499713: Mon May 13 03:13:32 2024 00:32:43.090 read: IOPS=484, BW=1938KiB/s (1984kB/s)(18.9MiB/10008msec) 00:32:43.090 slat (usec): min=8, max=103, avg=29.44, stdev=16.61 00:32:43.090 clat (usec): min=16650, max=56799, avg=32811.82, stdev=2176.45 00:32:43.090 lat (usec): min=16673, max=56823, avg=32841.26, stdev=2175.74 00:32:43.090 clat percentiles (usec): 00:32:43.090 | 1.00th=[27919], 5.00th=[30802], 10.00th=[31589], 20.00th=[31851], 00:32:43.090 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:32:43.090 | 70.00th=[32900], 80.00th=[33424], 90.00th=[34341], 95.00th=[35390], 00:32:43.090 | 99.00th=[41157], 99.50th=[45351], 99.90th=[52691], 99.95th=[52691], 00:32:43.090 | 99.99th=[56886] 00:32:43.090 bw ( KiB/s): min= 1792, max= 2048, per=4.18%, avg=1933.47, stdev=58.00, samples=19 00:32:43.090 iops : min= 448, max= 512, avg=483.37, stdev=14.50, samples=19 00:32:43.090 lat (msec) : 20=0.21%, 50=99.42%, 100=0.37% 00:32:43.090 cpu : usr=97.94%, sys=1.62%, ctx=20, majf=0, minf=9 00:32:43.090 IO depths : 1=2.4%, 2=6.0%, 4=20.8%, 8=60.7%, 16=10.2%, 32=0.0%, >=64=0.0% 00:32:43.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.090 complete : 0=0.0%, 4=93.8%, 8=0.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.090 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.090 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:43.090 filename2: (groupid=0, jobs=1): err= 0: pid=499714: Mon May 13 03:13:32 2024 00:32:43.090 read: IOPS=475, BW=1901KiB/s (1946kB/s)(18.6MiB/10009msec) 00:32:43.090 slat (usec): min=8, max=185, avg=35.26, stdev=21.64 00:32:43.090 clat (usec): min=8348, max=88431, avg=33453.65, stdev=5659.20 00:32:43.090 lat (usec): min=8358, max=88462, avg=33488.91, stdev=5657.71 00:32:43.090 clat percentiles (usec): 
00:32:43.090 | 1.00th=[19530], 5.00th=[29230], 10.00th=[31065], 20.00th=[31851], 00:32:43.090 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:32:43.090 | 70.00th=[32900], 80.00th=[33424], 90.00th=[37487], 95.00th=[45351], 00:32:43.090 | 99.00th=[53740], 99.50th=[62653], 99.90th=[73925], 99.95th=[87557], 00:32:43.090 | 99.99th=[88605] 00:32:43.090 bw ( KiB/s): min= 1776, max= 2084, per=4.10%, avg=1898.55, stdev=80.19, samples=20 00:32:43.090 iops : min= 444, max= 521, avg=474.45, stdev=20.12, samples=20 00:32:43.090 lat (msec) : 10=0.27%, 20=0.93%, 50=96.32%, 100=2.48% 00:32:43.090 cpu : usr=97.96%, sys=1.58%, ctx=15, majf=0, minf=9 00:32:43.090 IO depths : 1=0.6%, 2=2.1%, 4=12.2%, 8=70.8%, 16=14.2%, 32=0.0%, >=64=0.0% 00:32:43.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.090 complete : 0=0.0%, 4=91.9%, 8=4.6%, 16=3.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.090 issued rwts: total=4756,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.090 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:43.090 00:32:43.090 Run status group 0 (all jobs): 00:32:43.090 READ: bw=45.2MiB/s (47.4MB/s), 1865KiB/s-1993KiB/s (1909kB/s-2040kB/s), io=453MiB (475MB), run=10002-10027msec 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:43.090 bdev_null0 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
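The parameters set just above (NULL_DIF=1, bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, files=1) drive the next fio pass over two DIF-type-1 null bdevs. A job file in roughly the shape gen_fio_conf would produce for these values is sketched below as a hand-written approximation, not the literal generated config: /tmp/dif_rand.fio and /tmp/nvmf_targets.json are hypothetical names, the Nvme0n1/Nvme1n1 filenames assume the usual controller-name plus n1 bdev convention, and time_based is an assumption about how the 5-second runtime is bounded. The JSON config would carry one bdev_nvme_attach_controller entry per subsystem, as in the printf output traced earlier.

# Sketch only: approximate fio job file and invocation for this parameter set.
cat > /tmp/dif_rand.fio <<'EOF'
[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5
time_based=1

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio /tmp/dif_rand.fio --spdk_json_conf=/tmp/nvmf_targets.json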
00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:43.090 [2024-05-13 03:13:32.971574] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:43.090 bdev_null1 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.090 03:13:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:43.090 03:13:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.090 03:13:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:32:43.090 03:13:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:32:43.090 03:13:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:43.090 03:13:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:32:43.090 03:13:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:32:43.090 03:13:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:43.090 03:13:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf 
/dev/fd/62 /dev/fd/61 00:32:43.090 03:13:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:43.091 { 00:32:43.091 "params": { 00:32:43.091 "name": "Nvme$subsystem", 00:32:43.091 "trtype": "$TEST_TRANSPORT", 00:32:43.091 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:43.091 "adrfam": "ipv4", 00:32:43.091 "trsvcid": "$NVMF_PORT", 00:32:43.091 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:43.091 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:43.091 "hdgst": ${hdgst:-false}, 00:32:43.091 "ddgst": ${ddgst:-false} 00:32:43.091 }, 00:32:43.091 "method": "bdev_nvme_attach_controller" 00:32:43.091 } 00:32:43.091 EOF 00:32:43.091 )") 00:32:43.091 03:13:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:43.091 03:13:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:32:43.091 03:13:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:43.091 03:13:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:32:43.091 03:13:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:43.091 03:13:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:43.091 03:13:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:32:43.091 03:13:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:32:43.091 03:13:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:32:43.091 03:13:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:43.091 03:13:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:43.091 03:13:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:43.091 03:13:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:43.091 03:13:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:32:43.091 03:13:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:32:43.091 03:13:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:43.091 03:13:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:43.091 03:13:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:43.091 03:13:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:43.091 03:13:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:43.091 { 00:32:43.091 "params": { 00:32:43.091 "name": "Nvme$subsystem", 00:32:43.091 "trtype": "$TEST_TRANSPORT", 00:32:43.091 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:43.091 "adrfam": "ipv4", 00:32:43.091 "trsvcid": "$NVMF_PORT", 00:32:43.091 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:43.091 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:43.091 "hdgst": ${hdgst:-false}, 00:32:43.091 "ddgst": ${ddgst:-false} 00:32:43.091 }, 00:32:43.091 "method": "bdev_nvme_attach_controller" 00:32:43.091 } 00:32:43.091 EOF 00:32:43.091 )") 00:32:43.091 03:13:33 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:43.091 03:13:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:43.091 03:13:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:43.091 03:13:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:32:43.091 03:13:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:32:43.091 03:13:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:43.091 "params": { 00:32:43.091 "name": "Nvme0", 00:32:43.091 "trtype": "tcp", 00:32:43.091 "traddr": "10.0.0.2", 00:32:43.091 "adrfam": "ipv4", 00:32:43.091 "trsvcid": "4420", 00:32:43.091 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:43.091 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:43.091 "hdgst": false, 00:32:43.091 "ddgst": false 00:32:43.091 }, 00:32:43.091 "method": "bdev_nvme_attach_controller" 00:32:43.091 },{ 00:32:43.091 "params": { 00:32:43.091 "name": "Nvme1", 00:32:43.091 "trtype": "tcp", 00:32:43.091 "traddr": "10.0.0.2", 00:32:43.091 "adrfam": "ipv4", 00:32:43.091 "trsvcid": "4420", 00:32:43.091 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:43.091 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:43.091 "hdgst": false, 00:32:43.091 "ddgst": false 00:32:43.091 }, 00:32:43.091 "method": "bdev_nvme_attach_controller" 00:32:43.091 }' 00:32:43.091 03:13:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:32:43.091 03:13:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:32:43.091 03:13:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:32:43.091 03:13:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:43.091 03:13:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:32:43.091 03:13:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:32:43.091 03:13:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:32:43.091 03:13:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:32:43.091 03:13:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:43.091 03:13:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:43.091 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:43.091 ... 00:32:43.091 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:43.091 ... 
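
The trace above is the whole recipe for this pass: two null bdevs with 16-byte DIF type-1 metadata are exported over NVMe/TCP on 10.0.0.2:4420, the spdk_bdev fio plugin is preloaded, and the generated bdev JSON config is fed to fio over /dev/fd/62 while the job file arrives on /dev/fd/61. A condensed, hand-runnable sketch of the same flow follows; the scripts/rpc.py form of rpc_cmd, the file paths, and the outer "subsystems" wrapper of the JSON are assumptions, while every argument value is taken verbatim from the trace.

  # create the DIF-capable null bdev and export it over NVMe/TCP (the trace repeats this for bdev_null1 / cnode1)
  ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # /tmp/nvme.json -- assumed outer wrapper; the inner object matches the printf above (Nvme1 is identical with cnode1/host1)
  # { "subsystems": [ { "subsystem": "bdev", "config": [
  #     { "method": "bdev_nvme_attach_controller",
  #       "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
  #                   "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode0",
  #                   "hostnqn": "nqn.2016-06.io.spdk:host0", "hdgst": false, "ddgst": false } } ] } ] }

  # same invocation as the LD_PRELOAD line above, with regular files in place of /dev/fd/62 and /dev/fd/61;
  # the job file carries the dif.sh@115 parameters seen in the fio banner: randread, bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5
  LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/nvme.json /tmp/dif.fio
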
00:32:43.091 fio-3.35 00:32:43.091 Starting 4 threads 00:32:43.091 EAL: No free 2048 kB hugepages reported on node 1 00:32:48.363 00:32:48.363 filename0: (groupid=0, jobs=1): err= 0: pid=501090: Mon May 13 03:13:38 2024 00:32:48.363 read: IOPS=1146, BW=9171KiB/s (9391kB/s)(44.8MiB/5002msec) 00:32:48.363 slat (nsec): min=3734, max=32749, avg=11478.54, stdev=3551.30 00:32:48.363 clat (usec): min=3473, max=17065, avg=6943.29, stdev=1226.80 00:32:48.363 lat (usec): min=3487, max=17076, avg=6954.77, stdev=1226.98 00:32:48.363 clat percentiles (usec): 00:32:48.363 | 1.00th=[ 4752], 5.00th=[ 5342], 10.00th=[ 5538], 20.00th=[ 5997], 00:32:48.363 | 30.00th=[ 6063], 40.00th=[ 6456], 50.00th=[ 6849], 60.00th=[ 7111], 00:32:48.363 | 70.00th=[ 7439], 80.00th=[ 7898], 90.00th=[ 8586], 95.00th=[ 9110], 00:32:48.363 | 99.00th=[10290], 99.50th=[11076], 99.90th=[13566], 99.95th=[13566], 00:32:48.363 | 99.99th=[17171] 00:32:48.363 bw ( KiB/s): min= 8592, max=10288, per=20.83%, avg=9169.60, stdev=634.13, samples=10 00:32:48.363 iops : min= 1074, max= 1286, avg=1146.20, stdev=79.27, samples=10 00:32:48.363 lat (msec) : 4=0.05%, 10=98.20%, 20=1.74% 00:32:48.363 cpu : usr=93.66%, sys=5.62%, ctx=17, majf=0, minf=9 00:32:48.363 IO depths : 1=0.1%, 2=3.0%, 4=68.9%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:48.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:48.363 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:48.363 issued rwts: total=5734,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:48.363 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:48.363 filename0: (groupid=0, jobs=1): err= 0: pid=501091: Mon May 13 03:13:38 2024 00:32:48.363 read: IOPS=1920, BW=15.0MiB/s (15.7MB/s)(75.1MiB/5004msec) 00:32:48.363 slat (nsec): min=4126, max=36421, avg=9763.37, stdev=2360.59 00:32:48.363 clat (usec): min=2013, max=7759, avg=4133.73, stdev=896.63 00:32:48.363 lat (usec): min=2021, max=7775, avg=4143.50, stdev=896.26 00:32:48.363 clat percentiles (usec): 00:32:48.363 | 1.00th=[ 2474], 5.00th=[ 2769], 10.00th=[ 2933], 20.00th=[ 3326], 00:32:48.363 | 30.00th=[ 3621], 40.00th=[ 3916], 50.00th=[ 4113], 60.00th=[ 4293], 00:32:48.363 | 70.00th=[ 4555], 80.00th=[ 4817], 90.00th=[ 5407], 95.00th=[ 5800], 00:32:48.363 | 99.00th=[ 6063], 99.50th=[ 6194], 99.90th=[ 7242], 99.95th=[ 7570], 00:32:48.363 | 99.99th=[ 7767] 00:32:48.363 bw ( KiB/s): min=14688, max=16000, per=34.90%, avg=15364.80, stdev=397.97, samples=10 00:32:48.363 iops : min= 1836, max= 2000, avg=1920.60, stdev=49.75, samples=10 00:32:48.363 lat (msec) : 4=43.66%, 10=56.34% 00:32:48.363 cpu : usr=92.54%, sys=6.74%, ctx=13, majf=0, minf=2 00:32:48.363 IO depths : 1=0.3%, 2=4.2%, 4=69.2%, 8=26.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:48.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:48.363 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:48.363 issued rwts: total=9608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:48.363 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:48.363 filename1: (groupid=0, jobs=1): err= 0: pid=501092: Mon May 13 03:13:38 2024 00:32:48.363 read: IOPS=1193, BW=9550KiB/s (9779kB/s)(46.6MiB/5001msec) 00:32:48.363 slat (nsec): min=3789, max=38919, avg=11082.23, stdev=3156.10 00:32:48.363 clat (usec): min=3182, max=15966, avg=6667.64, stdev=1267.22 00:32:48.363 lat (usec): min=3190, max=15994, avg=6678.73, stdev=1267.22 00:32:48.363 clat percentiles (usec): 00:32:48.363 | 1.00th=[ 4178], 5.00th=[ 4948], 
10.00th=[ 5342], 20.00th=[ 5669], 00:32:48.364 | 30.00th=[ 5997], 40.00th=[ 6128], 50.00th=[ 6456], 60.00th=[ 6849], 00:32:48.364 | 70.00th=[ 7177], 80.00th=[ 7570], 90.00th=[ 8356], 95.00th=[ 8848], 00:32:48.364 | 99.00th=[10683], 99.50th=[10683], 99.90th=[12387], 99.95th=[12518], 00:32:48.364 | 99.99th=[15926] 00:32:48.364 bw ( KiB/s): min= 9040, max=10192, per=21.74%, avg=9573.33, stdev=440.07, samples=9 00:32:48.364 iops : min= 1130, max= 1274, avg=1196.67, stdev=55.01, samples=9 00:32:48.364 lat (msec) : 4=0.82%, 10=97.15%, 20=2.03% 00:32:48.364 cpu : usr=93.74%, sys=5.58%, ctx=84, majf=0, minf=9 00:32:48.364 IO depths : 1=0.1%, 2=4.1%, 4=66.8%, 8=29.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:48.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:48.364 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:48.364 issued rwts: total=5970,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:48.364 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:48.364 filename1: (groupid=0, jobs=1): err= 0: pid=501093: Mon May 13 03:13:38 2024 00:32:48.364 read: IOPS=1244, BW=9957KiB/s (10.2MB/s)(48.7MiB/5004msec) 00:32:48.364 slat (nsec): min=4156, max=28564, avg=10722.65, stdev=2963.68 00:32:48.364 clat (usec): min=3249, max=14322, avg=6391.93, stdev=1028.49 00:32:48.364 lat (usec): min=3257, max=14335, avg=6402.65, stdev=1028.27 00:32:48.364 clat percentiles (usec): 00:32:48.364 | 1.00th=[ 4293], 5.00th=[ 5014], 10.00th=[ 5342], 20.00th=[ 5604], 00:32:48.364 | 30.00th=[ 5800], 40.00th=[ 5997], 50.00th=[ 6063], 60.00th=[ 6456], 00:32:48.364 | 70.00th=[ 6915], 80.00th=[ 7242], 90.00th=[ 7701], 95.00th=[ 8160], 00:32:48.364 | 99.00th=[ 9372], 99.50th=[ 9634], 99.90th=[11994], 99.95th=[11994], 00:32:48.364 | 99.99th=[14353] 00:32:48.364 bw ( KiB/s): min= 9552, max=10624, per=22.61%, avg=9953.90, stdev=408.38, samples=10 00:32:48.364 iops : min= 1194, max= 1328, avg=1244.20, stdev=51.06, samples=10 00:32:48.364 lat (msec) : 4=0.35%, 10=99.25%, 20=0.40% 00:32:48.364 cpu : usr=92.62%, sys=5.84%, ctx=124, majf=0, minf=0 00:32:48.364 IO depths : 1=0.3%, 2=6.4%, 4=66.8%, 8=26.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:48.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:48.364 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:48.364 issued rwts: total=6228,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:48.364 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:48.364 00:32:48.364 Run status group 0 (all jobs): 00:32:48.364 READ: bw=43.0MiB/s (45.1MB/s), 9171KiB/s-15.0MiB/s (9391kB/s-15.7MB/s), io=215MiB (226MB), run=5001-5004msec 00:32:48.622 03:13:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:32:48.622 03:13:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:48.622 03:13:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:48.622 03:13:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:48.622 03:13:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:48.622 03:13:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:48.622 03:13:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.622 03:13:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:48.622 03:13:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:32:48.622 03:13:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:48.622 03:13:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.622 03:13:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:48.622 03:13:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.622 03:13:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:48.622 03:13:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:48.623 03:13:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:32:48.623 03:13:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:48.623 03:13:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.623 03:13:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:48.623 03:13:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.623 03:13:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:48.623 03:13:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.623 03:13:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:48.623 03:13:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.623 00:32:48.623 real 0m24.166s 00:32:48.623 user 4m27.460s 00:32:48.623 sys 0m8.803s 00:32:48.623 03:13:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:48.623 03:13:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:48.623 ************************************ 00:32:48.623 END TEST fio_dif_rand_params 00:32:48.623 ************************************ 00:32:48.623 03:13:39 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:32:48.623 03:13:39 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:32:48.623 03:13:39 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:48.623 03:13:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:48.623 ************************************ 00:32:48.623 START TEST fio_dif_digest 00:32:48.623 ************************************ 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:32:48.623 
03:13:39 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:48.623 bdev_null0 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:48.623 [2024-05-13 03:13:39.316079] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:48.623 
{ 00:32:48.623 "params": { 00:32:48.623 "name": "Nvme$subsystem", 00:32:48.623 "trtype": "$TEST_TRANSPORT", 00:32:48.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:48.623 "adrfam": "ipv4", 00:32:48.623 "trsvcid": "$NVMF_PORT", 00:32:48.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:48.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:48.623 "hdgst": ${hdgst:-false}, 00:32:48.623 "ddgst": ${ddgst:-false} 00:32:48.623 }, 00:32:48.623 "method": "bdev_nvme_attach_controller" 00:32:48.623 } 00:32:48.623 EOF 00:32:48.623 )") 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:48.623 "params": { 00:32:48.623 "name": "Nvme0", 00:32:48.623 "trtype": "tcp", 00:32:48.623 "traddr": "10.0.0.2", 00:32:48.623 "adrfam": "ipv4", 00:32:48.623 "trsvcid": "4420", 00:32:48.623 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:48.623 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:48.623 "hdgst": true, 00:32:48.623 "ddgst": true 00:32:48.623 }, 00:32:48.623 "method": "bdev_nvme_attach_controller" 00:32:48.623 }' 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:48.623 03:13:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:48.881 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:48.881 ... 
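
Relative to the rand_params pass, the digest pass changes only what the trace already shows: the null bdev is created with --dif-type 3, the generated attach-controller params turn on NVMe/TCP header and data digests, and the job shape comes from dif.sh@127/128 (bs=128k, numjobs=3, iodepth=3, runtime=10). As a sketch, only two lines of the earlier hand-run version would differ (same assumed paths as before):

  ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  # and in the fio JSON config the Nvme0 params become: ... "hdgst": true, "ddgst": true ...
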
00:32:48.881 fio-3.35 00:32:48.881 Starting 3 threads 00:32:48.881 EAL: No free 2048 kB hugepages reported on node 1 00:33:01.117 00:33:01.117 filename0: (groupid=0, jobs=1): err= 0: pid=501847: Mon May 13 03:13:50 2024 00:33:01.117 read: IOPS=138, BW=17.3MiB/s (18.1MB/s)(174MiB/10050msec) 00:33:01.117 slat (nsec): min=4743, max=38164, avg=12843.56, stdev=2703.57 00:33:01.117 clat (usec): min=10126, max=95477, avg=21625.41, stdev=7855.05 00:33:01.117 lat (usec): min=10137, max=95489, avg=21638.25, stdev=7855.21 00:33:01.117 clat percentiles (usec): 00:33:01.117 | 1.00th=[12387], 5.00th=[16188], 10.00th=[17433], 20.00th=[18744], 00:33:01.117 | 30.00th=[19530], 40.00th=[20055], 50.00th=[20579], 60.00th=[21103], 00:33:01.117 | 70.00th=[21627], 80.00th=[22152], 90.00th=[23200], 95.00th=[24511], 00:33:01.117 | 99.00th=[61080], 99.50th=[62653], 99.90th=[64226], 99.95th=[95945], 00:33:01.117 | 99.99th=[95945] 00:33:01.117 bw ( KiB/s): min=13824, max=19712, per=28.61%, avg=17779.20, stdev=1590.01, samples=20 00:33:01.117 iops : min= 108, max= 154, avg=138.90, stdev=12.42, samples=20 00:33:01.117 lat (msec) : 20=40.19%, 50=56.29%, 100=3.52% 00:33:01.117 cpu : usr=91.42%, sys=8.14%, ctx=18, majf=0, minf=114 00:33:01.117 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:01.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.117 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.117 issued rwts: total=1391,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:01.117 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:01.117 filename0: (groupid=0, jobs=1): err= 0: pid=501848: Mon May 13 03:13:50 2024 00:33:01.117 read: IOPS=194, BW=24.4MiB/s (25.6MB/s)(245MiB/10049msec) 00:33:01.117 slat (nsec): min=4512, max=36614, avg=13318.48, stdev=2606.99 00:33:01.117 clat (usec): min=7427, max=59452, avg=15348.82, stdev=5801.49 00:33:01.117 lat (usec): min=7440, max=59472, avg=15362.13, stdev=5801.51 00:33:01.117 clat percentiles (usec): 00:33:01.117 | 1.00th=[ 7963], 5.00th=[ 9765], 10.00th=[10945], 20.00th=[12125], 00:33:01.117 | 30.00th=[13960], 40.00th=[15008], 50.00th=[15533], 60.00th=[15926], 00:33:01.117 | 70.00th=[16319], 80.00th=[16909], 90.00th=[17433], 95.00th=[17957], 00:33:01.117 | 99.00th=[54789], 99.50th=[57410], 99.90th=[58983], 99.95th=[59507], 00:33:01.117 | 99.99th=[59507] 00:33:01.117 bw ( KiB/s): min=20992, max=30208, per=40.32%, avg=25052.05, stdev=2315.39, samples=20 00:33:01.117 iops : min= 164, max= 236, avg=195.70, stdev=18.09, samples=20 00:33:01.117 lat (msec) : 10=5.62%, 20=92.75%, 50=0.05%, 100=1.58% 00:33:01.117 cpu : usr=90.96%, sys=8.47%, ctx=16, majf=0, minf=197 00:33:01.117 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:01.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.117 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.117 issued rwts: total=1959,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:01.117 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:01.117 filename0: (groupid=0, jobs=1): err= 0: pid=501849: Mon May 13 03:13:50 2024 00:33:01.117 read: IOPS=152, BW=19.0MiB/s (20.0MB/s)(191MiB/10040msec) 00:33:01.117 slat (usec): min=4, max=626, avg=13.57, stdev=16.09 00:33:01.117 clat (usec): min=8614, max=60672, avg=19679.82, stdev=11414.86 00:33:01.117 lat (usec): min=8626, max=60685, avg=19693.39, stdev=11416.03 00:33:01.117 clat percentiles (usec): 00:33:01.117 | 
1.00th=[10552], 5.00th=[12911], 10.00th=[14353], 20.00th=[15270], 00:33:01.117 | 30.00th=[15926], 40.00th=[16188], 50.00th=[16581], 60.00th=[16909], 00:33:01.117 | 70.00th=[17433], 80.00th=[17957], 90.00th=[19268], 95.00th=[56361], 00:33:01.117 | 99.00th=[58983], 99.50th=[59507], 99.90th=[60556], 99.95th=[60556], 00:33:01.117 | 99.99th=[60556] 00:33:01.117 bw ( KiB/s): min=16384, max=23040, per=31.43%, avg=19530.80, stdev=2081.42, samples=20 00:33:01.117 iops : min= 128, max= 180, avg=152.55, stdev=16.26, samples=20 00:33:01.117 lat (msec) : 10=0.72%, 20=90.58%, 50=0.26%, 100=8.44% 00:33:01.117 cpu : usr=91.73%, sys=7.78%, ctx=26, majf=0, minf=143 00:33:01.117 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:01.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.117 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.117 issued rwts: total=1529,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:01.117 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:01.117 00:33:01.117 Run status group 0 (all jobs): 00:33:01.117 READ: bw=60.7MiB/s (63.6MB/s), 17.3MiB/s-24.4MiB/s (18.1MB/s-25.6MB/s), io=610MiB (640MB), run=10040-10050msec 00:33:01.117 03:13:50 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:33:01.117 03:13:50 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:33:01.117 03:13:50 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:33:01.117 03:13:50 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:01.117 03:13:50 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:33:01.117 03:13:50 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:01.118 03:13:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.118 03:13:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:01.118 03:13:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.118 03:13:50 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:01.118 03:13:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.118 03:13:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:01.118 03:13:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.118 00:33:01.118 real 0m11.184s 00:33:01.118 user 0m28.701s 00:33:01.118 sys 0m2.708s 00:33:01.118 03:13:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:01.118 03:13:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:01.118 ************************************ 00:33:01.118 END TEST fio_dif_digest 00:33:01.118 ************************************ 00:33:01.118 03:13:50 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:33:01.118 03:13:50 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:33:01.118 03:13:50 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:01.118 03:13:50 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:33:01.118 03:13:50 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:01.118 03:13:50 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:33:01.118 03:13:50 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:01.118 03:13:50 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:01.118 rmmod nvme_tcp 00:33:01.118 rmmod nvme_fabrics 
00:33:01.118 rmmod nvme_keyring 00:33:01.118 03:13:50 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:01.118 03:13:50 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:33:01.118 03:13:50 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:33:01.118 03:13:50 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 495908 ']' 00:33:01.118 03:13:50 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 495908 00:33:01.118 03:13:50 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 495908 ']' 00:33:01.118 03:13:50 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 495908 00:33:01.118 03:13:50 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:33:01.118 03:13:50 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:01.118 03:13:50 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 495908 00:33:01.118 03:13:50 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:01.118 03:13:50 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:01.118 03:13:50 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 495908' 00:33:01.118 killing process with pid 495908 00:33:01.118 03:13:50 nvmf_dif -- common/autotest_common.sh@965 -- # kill 495908 00:33:01.118 [2024-05-13 03:13:50.573372] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:33:01.118 03:13:50 nvmf_dif -- common/autotest_common.sh@970 -- # wait 495908 00:33:01.118 03:13:50 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:33:01.118 03:13:50 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:01.118 Waiting for block devices as requested 00:33:01.118 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:33:01.118 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:01.378 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:01.378 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:01.378 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:01.378 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:01.378 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:01.636 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:01.636 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:01.636 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:01.636 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:01.894 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:01.894 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:01.894 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:02.153 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:02.153 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:02.153 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:02.411 03:13:52 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:02.411 03:13:52 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:02.411 03:13:52 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:02.411 03:13:52 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:02.411 03:13:52 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:02.411 03:13:52 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:02.411 03:13:52 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:04.314 03:13:55 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:04.314 
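
The block above is the generic nvmftestfini teardown for the whole nvmf_dif suite: delete the remaining subsystems and null bdevs, stop the nvmf_tgt reactor process, unload the host-side nvme-tcp/nvme-fabrics modules, rebind the ioatdma/nvme devices with setup.sh reset, and drop the test namespace and addresses. Condensed into standalone commands as a sketch; the pid, namespace and interface names are the ones from this run, while the paths and the exact behaviour of _remove_spdk_ns are assumptions.

  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0   # per-subsystem cleanup, as traced earlier
  ./scripts/rpc.py bdev_null_delete bdev_null0
  kill 495908                                   # nvmf_tgt pid of this run (reactor_0)
  modprobe -v -r nvme-tcp; modprobe -v -r nvme-fabrics
  ./scripts/setup.sh reset                      # rebinds the 0000:88:00.0 nvme and ioatdma channels as logged above
  ip netns delete cvl_0_0_ns_spdk               # roughly what _remove_spdk_ns does for the target namespace
  ip -4 addr flush cvl_0_1
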
00:33:04.314 real 1m6.179s 00:33:04.314 user 6m19.783s 00:33:04.314 sys 0m22.140s 00:33:04.314 03:13:55 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:04.314 03:13:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:04.314 ************************************ 00:33:04.314 END TEST nvmf_dif 00:33:04.314 ************************************ 00:33:04.314 03:13:55 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:33:04.314 03:13:55 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:04.314 03:13:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:04.314 03:13:55 -- common/autotest_common.sh@10 -- # set +x 00:33:04.314 ************************************ 00:33:04.314 START TEST nvmf_abort_qd_sizes 00:33:04.314 ************************************ 00:33:04.314 03:13:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:33:04.314 * Looking for test storage... 00:33:04.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:04.314 03:13:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:04.314 03:13:55 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:33:04.314 03:13:55 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:04.314 03:13:55 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:04.315 03:13:55 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:04.315 03:13:55 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:04.315 03:13:55 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:04.315 03:13:55 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:04.315 03:13:55 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:04.315 03:13:55 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:04.315 03:13:55 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:04.315 03:13:55 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:04.573 03:13:55 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:04.573 03:13:55 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:04.573 03:13:55 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:04.573 03:13:55 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:04.573 03:13:55 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:04.574 03:13:55 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:04.574 03:13:55 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:04.574 03:13:55 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:04.574 03:13:55 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:04.574 03:13:55 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:04.574 03:13:55 nvmf_abort_qd_sizes 
-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.574 03:13:55 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.574 03:13:55 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.574 03:13:55 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:33:04.574 03:13:55 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.574 03:13:55 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:33:04.574 03:13:55 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:04.574 03:13:55 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:04.574 03:13:55 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:04.574 03:13:55 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:04.574 03:13:55 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:04.574 03:13:55 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:04.574 03:13:55 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:04.574 03:13:55 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:04.574 03:13:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:33:04.574 03:13:55 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:04.574 03:13:55 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:04.574 03:13:55 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:04.574 03:13:55 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:04.574 03:13:55 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:04.574 03:13:55 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:04.574 03:13:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:04.574 03:13:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
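
The common.sh sourcing above establishes the host identity the abort test uses when attaching from the kernel initiator: NVME_HOSTNQN comes from nvme gen-hostnqn, NVME_HOSTID is its uuid part, NVME_CONNECT is plain nvme connect, and NVME_SUBNQN is nqn.2016-06.io.spdk:testnqn. A minimal sketch of how those pieces combine into a connect call; the connect itself does not happen at this point in the log, the flags are standard nvme-cli, and the target address is the 10.0.0.2:4420 listener set up below.

  HOSTNQN=$(nvme gen-hostnqn)                   # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-...
  HOSTID=${HOSTNQN##*uuid:}                     # the uuid portion, as NVME_HOSTID above
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn \
       --hostnqn="$HOSTNQN" --hostid="$HOSTID"
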
00:33:04.574 03:13:55 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:04.574 03:13:55 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:04.574 03:13:55 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:33:04.574 03:13:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:06.476 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:06.476 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:33:06.476 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:06.476 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:06.476 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:06.476 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:06.476 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:06.476 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:33:06.476 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:06.476 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:33:06.476 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:33:06.476 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:33:06.476 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:33:06.476 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:33:06.476 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:33:06.476 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:06.476 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:06.476 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:06.476 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:06.476 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:06.476 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:06.476 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:06.476 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:06.476 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:06.476 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:06.476 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:06.476 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:06.476 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:06.476 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:06.476 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:06.476 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:06.476 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:06.476 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:06.476 03:13:57 nvmf_abort_qd_sizes -- 
nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:06.476 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:06.476 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:06.476 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:06.476 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:06.476 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:06.476 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:06.476 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:06.476 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:06.476 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:06.476 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:06.476 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:06.477 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:06.477 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@404 
-- # (( 2 == 0 )) 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:06.477 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:06.477 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:33:06.477 00:33:06.477 --- 10.0.0.2 ping statistics --- 00:33:06.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:06.477 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:06.477 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:06.477 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:33:06.477 00:33:06.477 --- 10.0.0.1 ping statistics --- 00:33:06.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:06.477 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:33:06.477 03:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:07.854 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:07.854 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:07.854 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:07.854 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:07.854 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:07.854 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:07.854 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:07.854 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:07.854 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:07.854 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:07.854 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:07.854 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:07.854 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:07.854 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:07.854 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:07.854 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:08.791 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:33:08.791 03:13:59 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:08.791 03:13:59 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:08.791 03:13:59 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:08.791 03:13:59 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:08.791 03:13:59 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:08.791 03:13:59 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:08.791 03:13:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:33:08.791 03:13:59 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:08.791 03:13:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:08.791 03:13:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:08.791 03:13:59 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=506631 00:33:08.791 03:13:59 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:33:08.791 03:13:59 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 506631 00:33:08.791 03:13:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 506631 ']' 00:33:08.791 03:13:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:08.791 03:13:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:08.791 03:13:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:08.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:08.791 03:13:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:08.791 03:13:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:08.791 [2024-05-13 03:13:59.541381] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:33:08.791 [2024-05-13 03:13:59.541450] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:08.791 EAL: No free 2048 kB hugepages reported on node 1 00:33:08.791 [2024-05-13 03:13:59.579971] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:09.049 [2024-05-13 03:13:59.611772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:09.049 [2024-05-13 03:13:59.703495] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:09.049 [2024-05-13 03:13:59.703553] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:09.049 [2024-05-13 03:13:59.703569] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:09.049 [2024-05-13 03:13:59.703582] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:09.049 [2024-05-13 03:13:59.703593] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:09.049 [2024-05-13 03:13:59.703675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:09.049 [2024-05-13 03:13:59.703732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:09.049 [2024-05-13 03:13:59.703773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:09.049 [2024-05-13 03:13:59.703775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:09.049 03:13:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:09.049 03:13:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:33:09.049 03:13:59 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:09.049 03:13:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:09.049 03:13:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:09.049 03:13:59 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:09.049 03:13:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:33:09.049 03:13:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:33:09.049 03:13:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:33:09.049 03:13:59 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:33:09.049 03:13:59 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:33:09.049 03:13:59 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:33:09.049 03:13:59 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:33:09.049 03:13:59 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in 
"${nvmes[@]}" 00:33:09.049 03:13:59 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:33:09.049 03:13:59 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:33:09.049 03:13:59 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:33:09.049 03:13:59 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:33:09.049 03:13:59 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:33:09.049 03:13:59 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:33:09.049 03:13:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:33:09.049 03:13:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:33:09.049 03:13:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:33:09.049 03:13:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:09.049 03:13:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:09.049 03:13:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:09.307 ************************************ 00:33:09.307 START TEST spdk_target_abort 00:33:09.307 ************************************ 00:33:09.307 03:13:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:33:09.307 03:13:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:33:09.307 03:13:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:33:09.307 03:13:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.307 03:13:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:12.585 spdk_targetn1 00:33:12.585 03:14:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.585 03:14:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:12.585 03:14:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.585 03:14:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:12.585 [2024-05-13 03:14:02.716540] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:12.585 03:14:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.585 03:14:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:33:12.585 03:14:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.585 03:14:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:12.585 03:14:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.585 03:14:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:33:12.585 03:14:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.585 03:14:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:12.585 03:14:02 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.585 03:14:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:33:12.585 03:14:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.585 03:14:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:12.585 [2024-05-13 03:14:02.748514] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:33:12.585 [2024-05-13 03:14:02.748822] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:12.585 03:14:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.585 03:14:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:33:12.585 03:14:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:12.585 03:14:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:12.585 03:14:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:33:12.585 03:14:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:12.585 03:14:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:33:12.585 03:14:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:12.585 03:14:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:12.585 03:14:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:12.585 03:14:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:12.585 03:14:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:12.585 03:14:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:12.585 03:14:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:12.585 03:14:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:12.585 03:14:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:33:12.585 03:14:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:12.585 03:14:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:12.585 03:14:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:12.585 03:14:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:12.585 03:14:02 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:12.585 03:14:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:12.585 EAL: No free 2048 kB hugepages reported on node 1 00:33:15.108 Initializing NVMe Controllers 00:33:15.108 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:15.108 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:15.108 Initialization complete. Launching workers. 00:33:15.108 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8487, failed: 0 00:33:15.108 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1371, failed to submit 7116 00:33:15.108 success 854, unsuccess 517, failed 0 00:33:15.108 03:14:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:15.108 03:14:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:15.365 EAL: No free 2048 kB hugepages reported on node 1 00:33:18.639 [2024-05-13 03:14:09.037732] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9ed0 is same with the state(5) to be set 00:33:18.639 Initializing NVMe Controllers 00:33:18.639 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:18.639 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:18.639 Initialization complete. Launching workers. 00:33:18.639 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8682, failed: 0 00:33:18.639 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1245, failed to submit 7437 00:33:18.639 success 308, unsuccess 937, failed 0 00:33:18.639 03:14:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:18.639 03:14:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:18.639 EAL: No free 2048 kB hugepages reported on node 1 00:33:21.947 Initializing NVMe Controllers 00:33:21.947 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:21.947 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:21.947 Initialization complete. Launching workers. 
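Before the qd=64 numbers continue below, the whole spdk_target_abort flow traced above condenses to a handful of RPCs plus the abort example. A sketch using only calls visible in the trace (rpc.py stands in for the test's rpc_cmd wrapper; the PCI address, NQN and serial are this run's values; paths shortened):

  # export the local NVMe drive over NVMe/TCP from the in-namespace target
  ./scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
  # then flood it with aborts at each queue depth (4, 24, 64 in this run)
  for qd in 4 24 64; do
      ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  done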
00:33:21.947 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31804, failed: 0 00:33:21.947 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2694, failed to submit 29110 00:33:21.947 success 526, unsuccess 2168, failed 0 00:33:21.947 03:14:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:33:21.947 03:14:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.947 03:14:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:21.947 03:14:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.947 03:14:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:33:21.947 03:14:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.947 03:14:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:23.318 03:14:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.318 03:14:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 506631 00:33:23.318 03:14:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 506631 ']' 00:33:23.318 03:14:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 506631 00:33:23.318 03:14:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:33:23.318 03:14:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:23.318 03:14:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 506631 00:33:23.318 03:14:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:23.318 03:14:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:23.318 03:14:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 506631' 00:33:23.318 killing process with pid 506631 00:33:23.318 03:14:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 506631 00:33:23.318 [2024-05-13 03:14:13.877033] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:33:23.318 03:14:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 506631 00:33:23.318 00:33:23.318 real 0m14.218s 00:33:23.318 user 0m53.541s 00:33:23.318 sys 0m2.811s 00:33:23.318 03:14:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:23.318 03:14:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:23.318 ************************************ 00:33:23.318 END TEST spdk_target_abort 00:33:23.318 ************************************ 00:33:23.318 03:14:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:33:23.318 03:14:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:23.318 03:14:14 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:33:23.318 03:14:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:23.576 ************************************ 00:33:23.576 START TEST kernel_target_abort 00:33:23.576 ************************************ 00:33:23.576 03:14:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:33:23.576 03:14:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:33:23.576 03:14:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@728 -- # local ip 00:33:23.576 03:14:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@729 -- # ip_candidates=() 00:33:23.576 03:14:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@729 -- # local -A ip_candidates 00:33:23.576 03:14:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:23.576 03:14:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:23.576 03:14:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:33:23.576 03:14:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:23.576 03:14:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:33:23.576 03:14:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:33:23.576 03:14:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:33:23.576 03:14:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:23.576 03:14:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:23.576 03:14:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:33:23.576 03:14:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:23.576 03:14:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:23.576 03:14:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:23.576 03:14:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:33:23.576 03:14:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:33:23.576 03:14:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:33:23.576 03:14:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:23.576 03:14:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:24.510 Waiting for block devices as requested 00:33:24.510 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:33:24.510 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:24.768 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:24.768 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:24.768 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:24.768 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:24.768 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:25.026 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:25.026 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:25.026 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:25.026 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:25.285 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:25.285 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:25.285 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:25.285 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:25.543 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:25.543 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:25.543 03:14:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:33:25.543 03:14:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:25.543 03:14:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:33:25.543 03:14:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:33:25.543 03:14:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:25.543 03:14:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:33:25.543 03:14:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:33:25.543 03:14:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:33:25.543 03:14:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:25.543 No valid GPT data, bailing 00:33:25.543 03:14:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:25.543 03:14:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:33:25.543 03:14:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:33:25.543 03:14:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:33:25.543 03:14:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:33:25.543 03:14:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:25.543 03:14:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:25.543 03:14:16 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:25.543 03:14:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:25.543 03:14:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:33:25.543 03:14:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:33:25.543 03:14:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:33:25.543 03:14:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:33:25.543 03:14:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:33:25.543 03:14:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:33:25.543 03:14:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:33:25.543 03:14:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:25.544 03:14:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:33:25.544 00:33:25.544 Discovery Log Number of Records 2, Generation counter 2 00:33:25.544 =====Discovery Log Entry 0====== 00:33:25.544 trtype: tcp 00:33:25.544 adrfam: ipv4 00:33:25.544 subtype: current discovery subsystem 00:33:25.544 treq: not specified, sq flow control disable supported 00:33:25.544 portid: 1 00:33:25.544 trsvcid: 4420 00:33:25.544 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:25.544 traddr: 10.0.0.1 00:33:25.544 eflags: none 00:33:25.544 sectype: none 00:33:25.544 =====Discovery Log Entry 1====== 00:33:25.544 trtype: tcp 00:33:25.544 adrfam: ipv4 00:33:25.544 subtype: nvme subsystem 00:33:25.544 treq: not specified, sq flow control disable supported 00:33:25.544 portid: 1 00:33:25.544 trsvcid: 4420 00:33:25.544 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:25.544 traddr: 10.0.0.1 00:33:25.544 eflags: none 00:33:25.544 sectype: none 00:33:25.544 03:14:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:33:25.544 03:14:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:25.544 03:14:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:25.544 03:14:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:33:25.544 03:14:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:25.544 03:14:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:33:25.544 03:14:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:25.544 03:14:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:25.544 03:14:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:25.544 03:14:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:25.544 03:14:16 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:25.544 03:14:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:25.544 03:14:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:25.544 03:14:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:25.544 03:14:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:33:25.544 03:14:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:25.544 03:14:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:33:25.544 03:14:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:25.544 03:14:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:25.544 03:14:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:25.544 03:14:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:25.802 EAL: No free 2048 kB hugepages reported on node 1 00:33:29.081 Initializing NVMe Controllers 00:33:29.081 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:29.081 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:29.081 Initialization complete. Launching workers. 00:33:29.081 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 23525, failed: 0 00:33:29.081 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 23525, failed to submit 0 00:33:29.081 success 0, unsuccess 23525, failed 0 00:33:29.081 03:14:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:29.081 03:14:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:29.081 EAL: No free 2048 kB hugepages reported on node 1 00:33:32.358 Initializing NVMe Controllers 00:33:32.358 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:32.358 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:32.358 Initialization complete. Launching workers. 
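The kernel_target_abort case builds the same 10.0.0.1:4420 subsystem with the in-kernel nvmet target instead of SPDK. The xtrace above shows the mkdir/echo commands but hides their redirect targets, so the configfs file names in the sketch below are the standard nvmet ones and are an assumption on my part, not something read from this log; the values are the ones from this run. One echo in the trace (the SPDK-nqn... string, which sets a subsystem attribute) is left out here:

  modprobe nvmet
  cd /sys/kernel/config/nvmet
  mkdir subsystems/nqn.2016-06.io.spdk:testnqn
  mkdir subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
  mkdir ports/1
  echo 1            > subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host
  echo /dev/nvme0n1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
  echo 1            > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
  echo 10.0.0.1     > ports/1/addr_traddr
  echo tcp          > ports/1/addr_trtype
  echo 4420         > ports/1/addr_trsvcid
  echo ipv4         > ports/1/addr_adrfam
  ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ports/1/subsystems/
  # the trace then verifies both discovery log entries with:
  nvme discover -t tcp -a 10.0.0.1 -s 4420    # (the trace also passes --hostnqn/--hostid)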
00:33:32.358 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 52106, failed: 0 00:33:32.358 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 13122, failed to submit 38984 00:33:32.358 success 0, unsuccess 13122, failed 0 00:33:32.358 03:14:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:32.358 03:14:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:32.358 EAL: No free 2048 kB hugepages reported on node 1 00:33:34.881 Initializing NVMe Controllers 00:33:34.881 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:34.881 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:34.881 Initialization complete. Launching workers. 00:33:34.881 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 48619, failed: 0 00:33:34.881 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 12118, failed to submit 36501 00:33:34.881 success 0, unsuccess 12118, failed 0 00:33:34.881 03:14:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:33:34.881 03:14:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:34.881 03:14:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:33:34.881 03:14:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:34.881 03:14:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:34.881 03:14:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:34.881 03:14:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:34.881 03:14:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:34.881 03:14:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:33:34.881 03:14:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:35.814 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:35.814 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:35.814 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:35.814 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:35.814 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:35.814 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:35.814 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:35.814 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:35.814 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:35.815 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:36.073 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:36.073 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:36.073 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:36.073 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:33:36.073 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:36.073 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:37.008 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:33:37.008 00:33:37.008 real 0m13.603s 00:33:37.008 user 0m3.744s 00:33:37.008 sys 0m3.268s 00:33:37.008 03:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:37.008 03:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:37.008 ************************************ 00:33:37.008 END TEST kernel_target_abort 00:33:37.008 ************************************ 00:33:37.008 03:14:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:37.008 03:14:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:33:37.008 03:14:27 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:37.008 03:14:27 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:33:37.008 03:14:27 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:37.008 03:14:27 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:33:37.008 03:14:27 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:37.008 03:14:27 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:37.008 rmmod nvme_tcp 00:33:37.008 rmmod nvme_fabrics 00:33:37.008 rmmod nvme_keyring 00:33:37.008 03:14:27 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:37.266 03:14:27 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:33:37.266 03:14:27 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:33:37.266 03:14:27 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 506631 ']' 00:33:37.266 03:14:27 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 506631 00:33:37.266 03:14:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 506631 ']' 00:33:37.266 03:14:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 506631 00:33:37.266 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (506631) - No such process 00:33:37.266 03:14:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 506631 is not found' 00:33:37.266 Process with pid 506631 is not found 00:33:37.266 03:14:27 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:33:37.266 03:14:27 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:38.201 Waiting for block devices as requested 00:33:38.201 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:33:38.487 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:38.487 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:38.487 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:38.487 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:38.487 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:38.747 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:38.747 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:38.747 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:38.747 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:39.006 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:39.006 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:39.006 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:39.006 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:39.264 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:39.264 0000:80:04.1 (8086 
0e21): vfio-pci -> ioatdma 00:33:39.264 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:39.264 03:14:30 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:39.264 03:14:30 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:39.264 03:14:30 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:39.264 03:14:30 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:39.264 03:14:30 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:39.264 03:14:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:39.264 03:14:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:41.801 03:14:32 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:41.801 00:33:41.801 real 0m37.006s 00:33:41.801 user 0m59.281s 00:33:41.801 sys 0m9.379s 00:33:41.801 03:14:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:41.801 03:14:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:41.801 ************************************ 00:33:41.801 END TEST nvmf_abort_qd_sizes 00:33:41.801 ************************************ 00:33:41.801 03:14:32 -- spdk/autotest.sh@293 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:41.801 03:14:32 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:41.801 03:14:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:41.801 03:14:32 -- common/autotest_common.sh@10 -- # set +x 00:33:41.801 ************************************ 00:33:41.801 START TEST keyring_file 00:33:41.801 ************************************ 00:33:41.801 03:14:32 keyring_file -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:41.801 * Looking for test storage... 
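Before the keyring_file output continues, a note on the nvmf_abort_qd_sizes epilogue traced just above: teardown mirrors the setup by disabling and removing the configfs entries, unloading the initiator-side modules, and flushing the namespace addressing. Condensed, with the same caveat that xtrace hides redirect targets, and with ip netns delete standing in for the _remove_spdk_ns helper (an assumption):

  echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
  rm -f    /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
  rmdir    /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
  rmdir    /sys/kernel/config/nvmet/ports/1
  rmdir    /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  modprobe -r nvmet_tcp nvmet
  modprobe -r nvme-tcp              # drops nvme_tcp, nvme_fabrics, nvme_keyring on this host
  modprobe -r nvme-fabrics
  ip netns delete cvl_0_0_ns_spdk   # assumption: what _remove_spdk_ns does under the hood
  ip -4 addr flush cvl_0_1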
00:33:41.801 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:41.801 03:14:32 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:41.801 03:14:32 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:41.801 03:14:32 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:33:41.801 03:14:32 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:41.801 03:14:32 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:41.801 03:14:32 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:41.801 03:14:32 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:41.801 03:14:32 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:41.801 03:14:32 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:41.801 03:14:32 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:41.801 03:14:32 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:41.801 03:14:32 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:41.801 03:14:32 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:41.801 03:14:32 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:41.801 03:14:32 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:41.801 03:14:32 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:41.801 03:14:32 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:41.801 03:14:32 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:41.801 03:14:32 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:41.801 03:14:32 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:41.801 03:14:32 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:41.801 03:14:32 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:41.801 03:14:32 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:41.801 03:14:32 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.801 03:14:32 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.801 03:14:32 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.801 03:14:32 keyring_file -- paths/export.sh@5 -- # export PATH 00:33:41.801 03:14:32 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.801 03:14:32 keyring_file -- nvmf/common.sh@47 -- # : 0 00:33:41.801 03:14:32 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:41.801 03:14:32 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:41.801 03:14:32 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:41.801 03:14:32 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:41.801 03:14:32 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:41.801 03:14:32 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:41.801 03:14:32 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:41.801 03:14:32 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:41.801 03:14:32 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:41.801 03:14:32 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:41.801 03:14:32 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:41.801 03:14:32 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:33:41.801 03:14:32 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:33:41.801 03:14:32 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:33:41.801 03:14:32 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:41.801 03:14:32 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:41.801 03:14:32 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:41.801 03:14:32 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:41.801 03:14:32 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:41.801 03:14:32 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:41.801 03:14:32 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.4oQiIgSx8R 00:33:41.801 03:14:32 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:41.801 03:14:32 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:41.801 03:14:32 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:33:41.801 03:14:32 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:41.801 03:14:32 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:33:41.801 03:14:32 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:33:41.801 03:14:32 keyring_file -- nvmf/common.sh@705 -- # python - 00:33:41.801 03:14:32 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.4oQiIgSx8R 00:33:41.801 03:14:32 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.4oQiIgSx8R 00:33:41.801 03:14:32 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.4oQiIgSx8R 00:33:41.801 03:14:32 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:33:41.801 03:14:32 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:41.801 03:14:32 keyring_file -- keyring/common.sh@17 -- # name=key1 00:33:41.801 03:14:32 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:41.801 03:14:32 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:41.801 03:14:32 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:41.801 03:14:32 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.NeSPivgrR7 00:33:41.801 03:14:32 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:41.801 03:14:32 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:41.801 03:14:32 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:33:41.801 03:14:32 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:41.801 03:14:32 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:33:41.801 03:14:32 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:33:41.801 03:14:32 keyring_file -- nvmf/common.sh@705 -- # python - 00:33:41.802 03:14:32 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.NeSPivgrR7 00:33:41.802 03:14:32 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.NeSPivgrR7 00:33:41.802 03:14:32 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.NeSPivgrR7 00:33:41.802 03:14:32 keyring_file -- keyring/file.sh@30 -- # tgtpid=512268 00:33:41.802 03:14:32 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:41.802 03:14:32 keyring_file -- keyring/file.sh@32 -- # waitforlisten 512268 00:33:41.802 03:14:32 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 512268 ']' 00:33:41.802 03:14:32 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:41.802 03:14:32 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:41.802 03:14:32 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:41.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:41.802 03:14:32 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:41.802 03:14:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:41.802 [2024-05-13 03:14:32.315483] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:33:41.802 [2024-05-13 03:14:32.315590] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid512268 ] 00:33:41.802 EAL: No free 2048 kB hugepages reported on node 1 00:33:41.802 [2024-05-13 03:14:32.350321] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
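The keyring_file suite starts by materializing its two TLS PSKs as 0600-mode temp files in the interchange format, then launches spdk_tgt. A sketch of the prep_key steps traced above; format_interchange_psk is the nvmf/common.sh helper the trace calls (its python body is not reproduced here), and the temp file names are whatever mktemp returns:

  key0=00112233445566778899aabbccddeeff
  key1=112233445566778899aabbccddeeff00
  key0path=$(mktemp)                               # /tmp/tmp.4oQiIgSx8R in this run
  key1path=$(mktemp)                               # /tmp/tmp.NeSPivgrR7 in this run
  format_interchange_psk "$key0" 0 > "$key0path"   # 0 selects the digest, as in the trace
  format_interchange_psk "$key1" 0 > "$key1path"
  chmod 0600 "$key0path" "$key1path"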
00:33:41.802 [2024-05-13 03:14:32.379515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:41.802 [2024-05-13 03:14:32.465066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:42.062 03:14:32 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:42.062 03:14:32 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:33:42.062 03:14:32 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:33:42.062 03:14:32 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:42.062 03:14:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:42.062 [2024-05-13 03:14:32.697619] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:42.062 null0 00:33:42.062 [2024-05-13 03:14:32.729625] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:33:42.062 [2024-05-13 03:14:32.729715] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:42.062 [2024-05-13 03:14:32.730200] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:42.062 [2024-05-13 03:14:32.737677] tcp.c:3657:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:33:42.062 03:14:32 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:42.062 03:14:32 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:42.062 03:14:32 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:33:42.062 03:14:32 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:42.062 03:14:32 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:42.062 03:14:32 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:42.062 03:14:32 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:42.062 03:14:32 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:42.062 03:14:32 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:42.062 03:14:32 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:42.062 03:14:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:42.062 [2024-05-13 03:14:32.745666] nvmf_rpc.c: 768:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:33:42.062 request: 00:33:42.062 { 00:33:42.062 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:33:42.062 "secure_channel": false, 00:33:42.062 "listen_address": { 00:33:42.062 "trtype": "tcp", 00:33:42.062 "traddr": "127.0.0.1", 00:33:42.062 "trsvcid": "4420" 00:33:42.062 }, 00:33:42.062 "method": "nvmf_subsystem_add_listener", 00:33:42.062 "req_id": 1 00:33:42.062 } 00:33:42.062 Got JSON-RPC error response 00:33:42.062 response: 00:33:42.062 { 00:33:42.062 "code": -32602, 00:33:42.062 "message": "Invalid parameters" 00:33:42.062 } 00:33:42.062 03:14:32 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:42.062 03:14:32 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:33:42.062 03:14:32 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:42.062 03:14:32 keyring_file -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:42.062 03:14:32 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:42.062 03:14:32 keyring_file -- keyring/file.sh@46 -- # bperfpid=512383 00:33:42.062 03:14:32 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:33:42.062 03:14:32 keyring_file -- keyring/file.sh@48 -- # waitforlisten 512383 /var/tmp/bperf.sock 00:33:42.062 03:14:32 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 512383 ']' 00:33:42.062 03:14:32 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:42.062 03:14:32 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:42.062 03:14:32 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:42.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:42.062 03:14:32 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:42.062 03:14:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:42.062 [2024-05-13 03:14:32.787207] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:33:42.062 [2024-05-13 03:14:32.787293] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid512383 ] 00:33:42.062 EAL: No free 2048 kB hugepages reported on node 1 00:33:42.062 [2024-05-13 03:14:32.817481] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
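Two things are worth noting in the block above: the second nvmf_subsystem_add_listener on 127.0.0.1:4420 is a deliberate negative check (the listener already exists, so the RPC is expected to fail with "Listener already exists" and JSON-RPC code -32602), and bdevperf is started idle with its own RPC socket so the rest of the test can configure it on the fly. The launch pattern, condensed from the trace (paths shortened):

  # idle until configured over /var/tmp/bperf.sock (-z); 128-deep 4k randrw once started
  ./build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z &
  bperfpid=$!
  # every bperf_cmd in the trace is rpc.py pointed at that socket, e.g.:
  ./scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys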
00:33:42.062 [2024-05-13 03:14:32.845315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:42.321 [2024-05-13 03:14:32.937100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:42.321 03:14:33 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:42.321 03:14:33 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:33:42.321 03:14:33 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.4oQiIgSx8R 00:33:42.321 03:14:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.4oQiIgSx8R 00:33:42.579 03:14:33 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.NeSPivgrR7 00:33:42.579 03:14:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.NeSPivgrR7 00:33:42.837 03:14:33 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:33:42.837 03:14:33 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:33:42.837 03:14:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:42.837 03:14:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:42.837 03:14:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:43.099 03:14:33 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.4oQiIgSx8R == \/\t\m\p\/\t\m\p\.\4\o\Q\i\I\g\S\x\8\R ]] 00:33:43.099 03:14:33 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:33:43.099 03:14:33 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:33:43.099 03:14:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:43.099 03:14:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:43.099 03:14:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:43.358 03:14:34 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.NeSPivgrR7 == \/\t\m\p\/\t\m\p\.\N\e\S\P\i\v\g\r\R\7 ]] 00:33:43.358 03:14:34 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:33:43.358 03:14:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:43.358 03:14:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:43.358 03:14:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:43.358 03:14:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:43.358 03:14:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:43.616 03:14:34 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:33:43.616 03:14:34 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:33:43.616 03:14:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:43.616 03:14:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:43.616 03:14:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:43.616 03:14:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:43.616 03:14:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:43.874 
03:14:34 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:33:43.874 03:14:34 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:43.874 03:14:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:44.132 [2024-05-13 03:14:34.773815] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:44.132 nvme0n1 00:33:44.132 03:14:34 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:33:44.132 03:14:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:44.132 03:14:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:44.132 03:14:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:44.132 03:14:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:44.132 03:14:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:44.389 03:14:35 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:33:44.389 03:14:35 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:33:44.390 03:14:35 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:44.390 03:14:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:44.390 03:14:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:44.390 03:14:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:44.390 03:14:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:44.648 03:14:35 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:33:44.648 03:14:35 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:44.648 Running I/O for 1 seconds... 
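At this point the test has registered two file-based keys, verified their paths and reference counts with keyring_get_keys piped through jq, attached an NVMe-oF/TCP controller that references key0 as its TLS PSK, and started I/O through bdevperf.py perform_tests. A condensed replay of those RPCs is sketched below; the key file paths are placeholders for the mktemp-generated files seen in the trace.

    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK_ROOT/scripts/rpc.py -s /var/tmp/bperf.sock"

    # Register two PSK files with the keyring (placeholder paths).
    $RPC keyring_file_add_key key0 /tmp/psk0.key
    $RPC keyring_file_add_key key1 /tmp/psk1.key

    # Inspect a key's path and refcount, as keyring/common.sh does with jq.
    $RPC keyring_get_keys | jq -r '.[] | select(.name == "key0") | .path'
    $RPC keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'

    # Attach an NVMe-oF/TCP controller that uses key0 as its PSK; key0's
    # refcount is expected to rise from 1 to 2 while the controller holds it.
    $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

    # Kick off the queued bdevperf workload over the same RPC socket.
    "$SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests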
00:33:46.026 00:33:46.026 Latency(us) 00:33:46.026 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:46.026 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:33:46.026 nvme0n1 : 1.02 2740.28 10.70 0.00 0.00 46301.57 6941.96 169325.61 00:33:46.026 =================================================================================================================== 00:33:46.026 Total : 2740.28 10.70 0.00 0.00 46301.57 6941.96 169325.61 00:33:46.026 0 00:33:46.026 03:14:36 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:46.026 03:14:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:46.026 03:14:36 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:33:46.026 03:14:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:46.026 03:14:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:46.026 03:14:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:46.026 03:14:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:46.026 03:14:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:46.285 03:14:36 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:33:46.285 03:14:36 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:33:46.285 03:14:36 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:46.285 03:14:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:46.285 03:14:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:46.285 03:14:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:46.285 03:14:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:46.542 03:14:37 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:33:46.542 03:14:37 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:46.542 03:14:37 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:33:46.542 03:14:37 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:46.542 03:14:37 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:33:46.542 03:14:37 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:46.542 03:14:37 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:33:46.542 03:14:37 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:46.542 03:14:37 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:46.542 03:14:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk key1 00:33:46.801 [2024-05-13 03:14:37.454348] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:46.801 [2024-05-13 03:14:37.454693] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc7e3b0 (107): Transport endpoint is not connected 00:33:46.801 [2024-05-13 03:14:37.455686] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc7e3b0 (9): Bad file descriptor 00:33:46.801 [2024-05-13 03:14:37.456683] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:46.801 [2024-05-13 03:14:37.456716] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:46.801 [2024-05-13 03:14:37.456733] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:46.801 request: 00:33:46.801 { 00:33:46.801 "name": "nvme0", 00:33:46.801 "trtype": "tcp", 00:33:46.801 "traddr": "127.0.0.1", 00:33:46.801 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:46.801 "adrfam": "ipv4", 00:33:46.801 "trsvcid": "4420", 00:33:46.801 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:46.801 "psk": "key1", 00:33:46.801 "method": "bdev_nvme_attach_controller", 00:33:46.801 "req_id": 1 00:33:46.801 } 00:33:46.801 Got JSON-RPC error response 00:33:46.801 response: 00:33:46.801 { 00:33:46.801 "code": -32602, 00:33:46.801 "message": "Invalid parameters" 00:33:46.801 } 00:33:46.801 03:14:37 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:33:46.801 03:14:37 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:46.801 03:14:37 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:46.801 03:14:37 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:46.801 03:14:37 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:33:46.801 03:14:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:46.801 03:14:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:46.801 03:14:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:46.801 03:14:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:46.801 03:14:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:47.059 03:14:37 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:33:47.059 03:14:37 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:33:47.059 03:14:37 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:47.059 03:14:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:47.059 03:14:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:47.059 03:14:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:47.059 03:14:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:47.382 03:14:37 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:33:47.382 03:14:37 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:33:47.383 03:14:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key 
key0 00:33:47.641 03:14:38 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:33:47.641 03:14:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:33:47.897 03:14:38 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:33:47.897 03:14:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:47.897 03:14:38 keyring_file -- keyring/file.sh@77 -- # jq length 00:33:47.897 03:14:38 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:33:47.897 03:14:38 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.4oQiIgSx8R 00:33:47.897 03:14:38 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.4oQiIgSx8R 00:33:47.897 03:14:38 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:33:47.897 03:14:38 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.4oQiIgSx8R 00:33:47.897 03:14:38 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:33:47.897 03:14:38 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:47.897 03:14:38 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:33:47.897 03:14:38 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:47.897 03:14:38 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.4oQiIgSx8R 00:33:47.897 03:14:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.4oQiIgSx8R 00:33:48.153 [2024-05-13 03:14:38.920455] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.4oQiIgSx8R': 0100660 00:33:48.153 [2024-05-13 03:14:38.920496] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:33:48.153 request: 00:33:48.153 { 00:33:48.153 "name": "key0", 00:33:48.153 "path": "/tmp/tmp.4oQiIgSx8R", 00:33:48.153 "method": "keyring_file_add_key", 00:33:48.153 "req_id": 1 00:33:48.153 } 00:33:48.153 Got JSON-RPC error response 00:33:48.153 response: 00:33:48.153 { 00:33:48.153 "code": -1, 00:33:48.153 "message": "Operation not permitted" 00:33:48.153 } 00:33:48.153 03:14:38 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:33:48.153 03:14:38 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:48.153 03:14:38 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:48.153 03:14:38 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:48.153 03:14:38 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.4oQiIgSx8R 00:33:48.153 03:14:38 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.4oQiIgSx8R 00:33:48.153 03:14:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.4oQiIgSx8R 00:33:48.410 03:14:39 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.4oQiIgSx8R 00:33:48.410 03:14:39 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:33:48.410 03:14:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:48.410 03:14:39 keyring_file -- keyring/common.sh@12 -- # jq -r 
.refcnt 00:33:48.410 03:14:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:48.410 03:14:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:48.410 03:14:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:48.667 03:14:39 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:33:48.667 03:14:39 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:48.667 03:14:39 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:33:48.667 03:14:39 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:48.667 03:14:39 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:33:48.667 03:14:39 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:48.667 03:14:39 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:33:48.667 03:14:39 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:48.667 03:14:39 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:48.667 03:14:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:48.924 [2024-05-13 03:14:39.662488] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.4oQiIgSx8R': No such file or directory 00:33:48.924 [2024-05-13 03:14:39.662528] nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:33:48.924 [2024-05-13 03:14:39.662560] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:33:48.924 [2024-05-13 03:14:39.662573] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:48.924 [2024-05-13 03:14:39.662586] bdev_nvme.c:6252:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:33:48.924 request: 00:33:48.924 { 00:33:48.924 "name": "nvme0", 00:33:48.924 "trtype": "tcp", 00:33:48.924 "traddr": "127.0.0.1", 00:33:48.924 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:48.924 "adrfam": "ipv4", 00:33:48.924 "trsvcid": "4420", 00:33:48.924 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:48.924 "psk": "key0", 00:33:48.924 "method": "bdev_nvme_attach_controller", 00:33:48.924 "req_id": 1 00:33:48.924 } 00:33:48.924 Got JSON-RPC error response 00:33:48.924 response: 00:33:48.924 { 00:33:48.924 "code": -19, 00:33:48.924 "message": "No such device" 00:33:48.924 } 00:33:48.924 03:14:39 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:33:48.924 03:14:39 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:48.924 03:14:39 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:48.924 03:14:39 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:48.924 03:14:39 
keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:33:48.924 03:14:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:49.182 03:14:39 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:49.182 03:14:39 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:49.182 03:14:39 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:49.182 03:14:39 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:49.182 03:14:39 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:49.182 03:14:39 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:49.182 03:14:39 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.lvHRvVGgEf 00:33:49.182 03:14:39 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:49.182 03:14:39 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:49.182 03:14:39 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:33:49.182 03:14:39 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:49.182 03:14:39 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:33:49.182 03:14:39 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:33:49.182 03:14:39 keyring_file -- nvmf/common.sh@705 -- # python - 00:33:49.182 03:14:39 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.lvHRvVGgEf 00:33:49.182 03:14:39 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.lvHRvVGgEf 00:33:49.182 03:14:39 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.lvHRvVGgEf 00:33:49.182 03:14:39 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.lvHRvVGgEf 00:33:49.182 03:14:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lvHRvVGgEf 00:33:49.440 03:14:40 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:49.440 03:14:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:50.005 nvme0n1 00:33:50.005 03:14:40 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:33:50.005 03:14:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:50.005 03:14:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:50.005 03:14:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:50.005 03:14:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:50.005 03:14:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:50.262 03:14:40 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:33:50.262 03:14:40 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:33:50.262 03:14:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock keyring_file_remove_key key0 00:33:50.262 03:14:41 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:33:50.262 03:14:41 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:33:50.262 03:14:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:50.262 03:14:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:50.262 03:14:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:50.520 03:14:41 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:33:50.520 03:14:41 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:33:50.520 03:14:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:50.520 03:14:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:50.520 03:14:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:50.520 03:14:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:50.520 03:14:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:50.777 03:14:41 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:33:50.777 03:14:41 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:50.777 03:14:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:51.035 03:14:41 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:33:51.035 03:14:41 keyring_file -- keyring/file.sh@104 -- # jq length 00:33:51.035 03:14:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:51.292 03:14:42 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:33:51.292 03:14:42 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.lvHRvVGgEf 00:33:51.292 03:14:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lvHRvVGgEf 00:33:51.550 03:14:42 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.NeSPivgrR7 00:33:51.550 03:14:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.NeSPivgrR7 00:33:51.808 03:14:42 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:51.808 03:14:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:52.066 nvme0n1 00:33:52.066 03:14:42 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:33:52.066 03:14:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:33:52.631 03:14:43 keyring_file -- keyring/file.sh@112 -- # config='{ 00:33:52.631 "subsystems": [ 00:33:52.631 { 00:33:52.631 
"subsystem": "keyring", 00:33:52.631 "config": [ 00:33:52.631 { 00:33:52.631 "method": "keyring_file_add_key", 00:33:52.631 "params": { 00:33:52.631 "name": "key0", 00:33:52.631 "path": "/tmp/tmp.lvHRvVGgEf" 00:33:52.631 } 00:33:52.631 }, 00:33:52.631 { 00:33:52.631 "method": "keyring_file_add_key", 00:33:52.631 "params": { 00:33:52.631 "name": "key1", 00:33:52.631 "path": "/tmp/tmp.NeSPivgrR7" 00:33:52.631 } 00:33:52.631 } 00:33:52.631 ] 00:33:52.631 }, 00:33:52.631 { 00:33:52.631 "subsystem": "iobuf", 00:33:52.631 "config": [ 00:33:52.631 { 00:33:52.631 "method": "iobuf_set_options", 00:33:52.631 "params": { 00:33:52.631 "small_pool_count": 8192, 00:33:52.631 "large_pool_count": 1024, 00:33:52.631 "small_bufsize": 8192, 00:33:52.631 "large_bufsize": 135168 00:33:52.631 } 00:33:52.631 } 00:33:52.631 ] 00:33:52.631 }, 00:33:52.631 { 00:33:52.631 "subsystem": "sock", 00:33:52.631 "config": [ 00:33:52.631 { 00:33:52.631 "method": "sock_impl_set_options", 00:33:52.631 "params": { 00:33:52.631 "impl_name": "posix", 00:33:52.631 "recv_buf_size": 2097152, 00:33:52.631 "send_buf_size": 2097152, 00:33:52.631 "enable_recv_pipe": true, 00:33:52.631 "enable_quickack": false, 00:33:52.631 "enable_placement_id": 0, 00:33:52.631 "enable_zerocopy_send_server": true, 00:33:52.631 "enable_zerocopy_send_client": false, 00:33:52.631 "zerocopy_threshold": 0, 00:33:52.631 "tls_version": 0, 00:33:52.631 "enable_ktls": false 00:33:52.631 } 00:33:52.631 }, 00:33:52.631 { 00:33:52.631 "method": "sock_impl_set_options", 00:33:52.631 "params": { 00:33:52.631 "impl_name": "ssl", 00:33:52.631 "recv_buf_size": 4096, 00:33:52.631 "send_buf_size": 4096, 00:33:52.631 "enable_recv_pipe": true, 00:33:52.631 "enable_quickack": false, 00:33:52.631 "enable_placement_id": 0, 00:33:52.631 "enable_zerocopy_send_server": true, 00:33:52.631 "enable_zerocopy_send_client": false, 00:33:52.631 "zerocopy_threshold": 0, 00:33:52.631 "tls_version": 0, 00:33:52.631 "enable_ktls": false 00:33:52.631 } 00:33:52.631 } 00:33:52.631 ] 00:33:52.631 }, 00:33:52.631 { 00:33:52.631 "subsystem": "vmd", 00:33:52.631 "config": [] 00:33:52.631 }, 00:33:52.631 { 00:33:52.631 "subsystem": "accel", 00:33:52.631 "config": [ 00:33:52.632 { 00:33:52.632 "method": "accel_set_options", 00:33:52.632 "params": { 00:33:52.632 "small_cache_size": 128, 00:33:52.632 "large_cache_size": 16, 00:33:52.632 "task_count": 2048, 00:33:52.632 "sequence_count": 2048, 00:33:52.632 "buf_count": 2048 00:33:52.632 } 00:33:52.632 } 00:33:52.632 ] 00:33:52.632 }, 00:33:52.632 { 00:33:52.632 "subsystem": "bdev", 00:33:52.632 "config": [ 00:33:52.632 { 00:33:52.632 "method": "bdev_set_options", 00:33:52.632 "params": { 00:33:52.632 "bdev_io_pool_size": 65535, 00:33:52.632 "bdev_io_cache_size": 256, 00:33:52.632 "bdev_auto_examine": true, 00:33:52.632 "iobuf_small_cache_size": 128, 00:33:52.632 "iobuf_large_cache_size": 16 00:33:52.632 } 00:33:52.632 }, 00:33:52.632 { 00:33:52.632 "method": "bdev_raid_set_options", 00:33:52.632 "params": { 00:33:52.632 "process_window_size_kb": 1024 00:33:52.632 } 00:33:52.632 }, 00:33:52.632 { 00:33:52.632 "method": "bdev_iscsi_set_options", 00:33:52.632 "params": { 00:33:52.632 "timeout_sec": 30 00:33:52.632 } 00:33:52.632 }, 00:33:52.632 { 00:33:52.632 "method": "bdev_nvme_set_options", 00:33:52.632 "params": { 00:33:52.632 "action_on_timeout": "none", 00:33:52.632 "timeout_us": 0, 00:33:52.632 "timeout_admin_us": 0, 00:33:52.632 "keep_alive_timeout_ms": 10000, 00:33:52.632 "arbitration_burst": 0, 00:33:52.632 "low_priority_weight": 0, 
00:33:52.632 "medium_priority_weight": 0, 00:33:52.632 "high_priority_weight": 0, 00:33:52.632 "nvme_adminq_poll_period_us": 10000, 00:33:52.632 "nvme_ioq_poll_period_us": 0, 00:33:52.632 "io_queue_requests": 512, 00:33:52.632 "delay_cmd_submit": true, 00:33:52.632 "transport_retry_count": 4, 00:33:52.632 "bdev_retry_count": 3, 00:33:52.632 "transport_ack_timeout": 0, 00:33:52.632 "ctrlr_loss_timeout_sec": 0, 00:33:52.632 "reconnect_delay_sec": 0, 00:33:52.632 "fast_io_fail_timeout_sec": 0, 00:33:52.632 "disable_auto_failback": false, 00:33:52.632 "generate_uuids": false, 00:33:52.632 "transport_tos": 0, 00:33:52.632 "nvme_error_stat": false, 00:33:52.632 "rdma_srq_size": 0, 00:33:52.632 "io_path_stat": false, 00:33:52.632 "allow_accel_sequence": false, 00:33:52.632 "rdma_max_cq_size": 0, 00:33:52.632 "rdma_cm_event_timeout_ms": 0, 00:33:52.632 "dhchap_digests": [ 00:33:52.632 "sha256", 00:33:52.632 "sha384", 00:33:52.632 "sha512" 00:33:52.632 ], 00:33:52.632 "dhchap_dhgroups": [ 00:33:52.632 "null", 00:33:52.632 "ffdhe2048", 00:33:52.632 "ffdhe3072", 00:33:52.632 "ffdhe4096", 00:33:52.632 "ffdhe6144", 00:33:52.632 "ffdhe8192" 00:33:52.632 ] 00:33:52.632 } 00:33:52.632 }, 00:33:52.632 { 00:33:52.632 "method": "bdev_nvme_attach_controller", 00:33:52.632 "params": { 00:33:52.632 "name": "nvme0", 00:33:52.632 "trtype": "TCP", 00:33:52.632 "adrfam": "IPv4", 00:33:52.632 "traddr": "127.0.0.1", 00:33:52.632 "trsvcid": "4420", 00:33:52.632 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:52.632 "prchk_reftag": false, 00:33:52.632 "prchk_guard": false, 00:33:52.632 "ctrlr_loss_timeout_sec": 0, 00:33:52.632 "reconnect_delay_sec": 0, 00:33:52.632 "fast_io_fail_timeout_sec": 0, 00:33:52.632 "psk": "key0", 00:33:52.632 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:52.632 "hdgst": false, 00:33:52.632 "ddgst": false 00:33:52.632 } 00:33:52.632 }, 00:33:52.632 { 00:33:52.632 "method": "bdev_nvme_set_hotplug", 00:33:52.632 "params": { 00:33:52.632 "period_us": 100000, 00:33:52.632 "enable": false 00:33:52.632 } 00:33:52.632 }, 00:33:52.632 { 00:33:52.632 "method": "bdev_wait_for_examine" 00:33:52.632 } 00:33:52.632 ] 00:33:52.632 }, 00:33:52.632 { 00:33:52.632 "subsystem": "nbd", 00:33:52.632 "config": [] 00:33:52.632 } 00:33:52.632 ] 00:33:52.632 }' 00:33:52.632 03:14:43 keyring_file -- keyring/file.sh@114 -- # killprocess 512383 00:33:52.632 03:14:43 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 512383 ']' 00:33:52.632 03:14:43 keyring_file -- common/autotest_common.sh@950 -- # kill -0 512383 00:33:52.632 03:14:43 keyring_file -- common/autotest_common.sh@951 -- # uname 00:33:52.632 03:14:43 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:52.632 03:14:43 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 512383 00:33:52.632 03:14:43 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:52.632 03:14:43 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:52.632 03:14:43 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 512383' 00:33:52.632 killing process with pid 512383 00:33:52.632 03:14:43 keyring_file -- common/autotest_common.sh@965 -- # kill 512383 00:33:52.632 Received shutdown signal, test time was about 1.000000 seconds 00:33:52.632 00:33:52.632 Latency(us) 00:33:52.632 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:52.632 
=================================================================================================================== 00:33:52.632 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:52.632 03:14:43 keyring_file -- common/autotest_common.sh@970 -- # wait 512383 00:33:52.891 03:14:43 keyring_file -- keyring/file.sh@117 -- # bperfpid=513722 00:33:52.891 03:14:43 keyring_file -- keyring/file.sh@119 -- # waitforlisten 513722 /var/tmp/bperf.sock 00:33:52.891 03:14:43 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 513722 ']' 00:33:52.891 03:14:43 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:52.891 03:14:43 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:33:52.891 03:14:43 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:52.891 03:14:43 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:52.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:52.891 03:14:43 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:33:52.891 "subsystems": [ 00:33:52.891 { 00:33:52.891 "subsystem": "keyring", 00:33:52.891 "config": [ 00:33:52.891 { 00:33:52.891 "method": "keyring_file_add_key", 00:33:52.891 "params": { 00:33:52.891 "name": "key0", 00:33:52.891 "path": "/tmp/tmp.lvHRvVGgEf" 00:33:52.891 } 00:33:52.891 }, 00:33:52.891 { 00:33:52.891 "method": "keyring_file_add_key", 00:33:52.891 "params": { 00:33:52.891 "name": "key1", 00:33:52.891 "path": "/tmp/tmp.NeSPivgrR7" 00:33:52.891 } 00:33:52.891 } 00:33:52.891 ] 00:33:52.891 }, 00:33:52.891 { 00:33:52.891 "subsystem": "iobuf", 00:33:52.891 "config": [ 00:33:52.891 { 00:33:52.891 "method": "iobuf_set_options", 00:33:52.891 "params": { 00:33:52.891 "small_pool_count": 8192, 00:33:52.891 "large_pool_count": 1024, 00:33:52.891 "small_bufsize": 8192, 00:33:52.891 "large_bufsize": 135168 00:33:52.891 } 00:33:52.891 } 00:33:52.891 ] 00:33:52.891 }, 00:33:52.891 { 00:33:52.891 "subsystem": "sock", 00:33:52.891 "config": [ 00:33:52.891 { 00:33:52.891 "method": "sock_impl_set_options", 00:33:52.891 "params": { 00:33:52.891 "impl_name": "posix", 00:33:52.891 "recv_buf_size": 2097152, 00:33:52.891 "send_buf_size": 2097152, 00:33:52.891 "enable_recv_pipe": true, 00:33:52.891 "enable_quickack": false, 00:33:52.891 "enable_placement_id": 0, 00:33:52.891 "enable_zerocopy_send_server": true, 00:33:52.891 "enable_zerocopy_send_client": false, 00:33:52.891 "zerocopy_threshold": 0, 00:33:52.891 "tls_version": 0, 00:33:52.891 "enable_ktls": false 00:33:52.891 } 00:33:52.891 }, 00:33:52.891 { 00:33:52.891 "method": "sock_impl_set_options", 00:33:52.891 "params": { 00:33:52.891 "impl_name": "ssl", 00:33:52.891 "recv_buf_size": 4096, 00:33:52.891 "send_buf_size": 4096, 00:33:52.891 "enable_recv_pipe": true, 00:33:52.891 "enable_quickack": false, 00:33:52.891 "enable_placement_id": 0, 00:33:52.891 "enable_zerocopy_send_server": true, 00:33:52.891 "enable_zerocopy_send_client": false, 00:33:52.891 "zerocopy_threshold": 0, 00:33:52.891 "tls_version": 0, 00:33:52.891 "enable_ktls": false 00:33:52.891 } 00:33:52.891 } 00:33:52.891 ] 00:33:52.891 }, 00:33:52.891 { 00:33:52.891 "subsystem": "vmd", 00:33:52.891 "config": [] 00:33:52.891 }, 00:33:52.891 { 00:33:52.891 "subsystem": "accel", 00:33:52.891 "config": [ 
00:33:52.891 { 00:33:52.891 "method": "accel_set_options", 00:33:52.891 "params": { 00:33:52.891 "small_cache_size": 128, 00:33:52.891 "large_cache_size": 16, 00:33:52.891 "task_count": 2048, 00:33:52.891 "sequence_count": 2048, 00:33:52.891 "buf_count": 2048 00:33:52.891 } 00:33:52.891 } 00:33:52.891 ] 00:33:52.891 }, 00:33:52.891 { 00:33:52.891 "subsystem": "bdev", 00:33:52.891 "config": [ 00:33:52.892 { 00:33:52.892 "method": "bdev_set_options", 00:33:52.892 "params": { 00:33:52.892 "bdev_io_pool_size": 65535, 00:33:52.892 "bdev_io_cache_size": 256, 00:33:52.892 "bdev_auto_examine": true, 00:33:52.892 "iobuf_small_cache_size": 128, 00:33:52.892 "iobuf_large_cache_size": 16 00:33:52.892 } 00:33:52.892 }, 00:33:52.892 { 00:33:52.892 "method": "bdev_raid_set_options", 00:33:52.892 "params": { 00:33:52.892 "process_window_size_kb": 1024 00:33:52.892 } 00:33:52.892 }, 00:33:52.892 { 00:33:52.892 "method": "bdev_iscsi_set_options", 00:33:52.892 "params": { 00:33:52.892 "timeout_sec": 30 00:33:52.892 } 00:33:52.892 }, 00:33:52.892 { 00:33:52.892 "method": "bdev_nvme_set_options", 00:33:52.892 "params": { 00:33:52.892 "action_on_timeout": "none", 00:33:52.892 "timeout_us": 0, 00:33:52.892 "timeout_admin_us": 0, 00:33:52.892 "keep_alive_timeout_ms": 10000, 00:33:52.892 "arbitration_burst": 0, 00:33:52.892 "low_priority_weight": 0, 00:33:52.892 "medium_priority_weight": 0, 00:33:52.892 "high_priority_weight": 0, 00:33:52.892 "nvme_adminq_poll_period_us": 10000, 00:33:52.892 "nvme_ioq_poll_period_us": 0, 00:33:52.892 "io_queue_requests": 512, 00:33:52.892 "delay_cmd_submit": true, 00:33:52.892 "transport_retry_count": 4, 00:33:52.892 "bdev_retry_count": 3, 00:33:52.892 "transport_ack_timeout": 0, 00:33:52.892 "ctrlr_loss_timeout_sec": 0, 00:33:52.892 "reconnect_delay_sec": 0, 00:33:52.892 "fast_io_fail_timeout_sec": 0, 00:33:52.892 "disable_auto_failback": false, 00:33:52.892 "generate_uuids": false, 00:33:52.892 "transport_tos": 0, 00:33:52.892 "nvme_error_stat": false, 00:33:52.892 "rdma_srq_size": 0, 00:33:52.892 "io_path_stat": false, 00:33:52.892 "allow_accel_sequence": false, 00:33:52.892 "rdma_max_cq_size": 0, 00:33:52.892 "rdma_cm_event_timeout_ms": 0, 00:33:52.892 "dhchap_digests": [ 00:33:52.892 "sha256", 00:33:52.892 "sha384", 00:33:52.892 "sha512" 00:33:52.892 ], 00:33:52.892 "dhchap_dhgroups": [ 00:33:52.892 "null", 00:33:52.892 "ffdhe2048", 00:33:52.892 "ffdhe3072", 00:33:52.892 "ffdhe4096", 00:33:52.892 "ffdhe6144", 00:33:52.892 "ffdhe8192" 00:33:52.892 ] 00:33:52.892 } 00:33:52.892 }, 00:33:52.892 { 00:33:52.892 "method": "bdev_nvme_attach_controller", 00:33:52.892 "params": { 00:33:52.892 "name": "nvme0", 00:33:52.892 "trtype": "TCP", 00:33:52.892 "adrfam": "IPv4", 00:33:52.892 "traddr": "127.0.0.1", 00:33:52.892 "trsvcid": "4420", 00:33:52.892 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:52.892 "prchk_reftag": false, 00:33:52.892 "prchk_guard": false, 00:33:52.892 "ctrlr_loss_timeout_sec": 0, 00:33:52.892 "reconnect_delay_sec": 0, 00:33:52.892 "fast_io_fail_timeout_sec": 0, 00:33:52.892 "psk": "key0", 00:33:52.892 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:52.892 "hdgst": false, 00:33:52.892 "ddgst": false 00:33:52.892 } 00:33:52.892 }, 00:33:52.892 { 00:33:52.892 "method": "bdev_nvme_set_hotplug", 00:33:52.892 "params": { 00:33:52.892 "period_us": 100000, 00:33:52.892 "enable": false 00:33:52.892 } 00:33:52.892 }, 00:33:52.892 { 00:33:52.892 "method": "bdev_wait_for_examine" 00:33:52.892 } 00:33:52.892 ] 00:33:52.892 }, 00:33:52.892 { 00:33:52.892 "subsystem": "nbd", 
00:33:52.892 "config": [] 00:33:52.892 } 00:33:52.892 ] 00:33:52.892 }' 00:33:52.892 03:14:43 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:52.892 03:14:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:52.892 [2024-05-13 03:14:43.476526] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:33:52.892 [2024-05-13 03:14:43.476607] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid513722 ] 00:33:52.892 EAL: No free 2048 kB hugepages reported on node 1 00:33:52.892 [2024-05-13 03:14:43.506224] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:52.892 [2024-05-13 03:14:43.537465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:52.892 [2024-05-13 03:14:43.626535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:53.149 [2024-05-13 03:14:43.804998] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:53.714 03:14:44 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:53.714 03:14:44 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:33:53.714 03:14:44 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:33:53.714 03:14:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:53.714 03:14:44 keyring_file -- keyring/file.sh@120 -- # jq length 00:33:53.970 03:14:44 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:33:53.971 03:14:44 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:33:53.971 03:14:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:53.971 03:14:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:53.971 03:14:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:53.971 03:14:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:53.971 03:14:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:54.228 03:14:44 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:33:54.228 03:14:44 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:33:54.228 03:14:44 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:54.228 03:14:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:54.228 03:14:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:54.228 03:14:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:54.228 03:14:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:54.485 03:14:45 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:33:54.485 03:14:45 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:33:54.485 03:14:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:33:54.485 03:14:45 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:33:54.743 03:14:45 keyring_file -- 
keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:33:54.743 03:14:45 keyring_file -- keyring/file.sh@1 -- # cleanup 00:33:54.743 03:14:45 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.lvHRvVGgEf /tmp/tmp.NeSPivgrR7 00:33:54.743 03:14:45 keyring_file -- keyring/file.sh@20 -- # killprocess 513722 00:33:54.743 03:14:45 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 513722 ']' 00:33:54.743 03:14:45 keyring_file -- common/autotest_common.sh@950 -- # kill -0 513722 00:33:54.743 03:14:45 keyring_file -- common/autotest_common.sh@951 -- # uname 00:33:54.743 03:14:45 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:54.743 03:14:45 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 513722 00:33:54.743 03:14:45 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:54.743 03:14:45 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:54.743 03:14:45 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 513722' 00:33:54.743 killing process with pid 513722 00:33:54.743 03:14:45 keyring_file -- common/autotest_common.sh@965 -- # kill 513722 00:33:54.743 Received shutdown signal, test time was about 1.000000 seconds 00:33:54.743 00:33:54.743 Latency(us) 00:33:54.743 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:54.743 =================================================================================================================== 00:33:54.743 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:54.743 03:14:45 keyring_file -- common/autotest_common.sh@970 -- # wait 513722 00:33:55.000 03:14:45 keyring_file -- keyring/file.sh@21 -- # killprocess 512268 00:33:55.000 03:14:45 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 512268 ']' 00:33:55.000 03:14:45 keyring_file -- common/autotest_common.sh@950 -- # kill -0 512268 00:33:55.000 03:14:45 keyring_file -- common/autotest_common.sh@951 -- # uname 00:33:55.000 03:14:45 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:55.000 03:14:45 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 512268 00:33:55.000 03:14:45 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:55.000 03:14:45 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:55.000 03:14:45 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 512268' 00:33:55.000 killing process with pid 512268 00:33:55.000 03:14:45 keyring_file -- common/autotest_common.sh@965 -- # kill 512268 00:33:55.000 [2024-05-13 03:14:45.713927] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:33:55.000 [2024-05-13 03:14:45.713986] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:33:55.000 03:14:45 keyring_file -- common/autotest_common.sh@970 -- # wait 512268 00:33:55.566 00:33:55.566 real 0m14.001s 00:33:55.566 user 0m34.878s 00:33:55.566 sys 0m2.987s 00:33:55.566 03:14:46 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:55.566 03:14:46 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:55.566 ************************************ 00:33:55.566 END TEST keyring_file 00:33:55.566 ************************************ 
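Among the cases exercised by the keyring_file suite that just ended was the key-file permission check: a group-readable key file is rejected by keyring_file_add_key with "Invalid permissions" (code -1), an owner-only file is accepted, and deleting the backing file makes a later bdev_nvme_attach_controller fail with "No such device" (code -19). The sketch below replays that sequence against the same RPC socket; the key file path is a placeholder for the mktemp-generated file in the trace.

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    KEYFILE=/tmp/psk0.key   # placeholder for the mktemp-generated key file

    # Group-readable key files are rejected ("Invalid permissions ... 0100660").
    chmod 0660 "$KEYFILE"
    $RPC keyring_file_add_key key0 "$KEYFILE" || echo "rejected as expected"

    # Owner-only permissions are accepted.
    chmod 0600 "$KEYFILE"
    $RPC keyring_file_add_key key0 "$KEYFILE"

    # Removing the file afterwards makes a later attach fail with -19
    # ("Could not stat key file ... No such file or directory").
    rm -f "$KEYFILE"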
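Teardown throughout the suite relies on the killprocess helper traced above for pids 513722 and 512268: it validates the pid, confirms with kill -0 that the process is alive, checks the process name with ps, then signals and waits. A sketch of that pattern as it appears in the trace follows; the real helper lives in autotest_common.sh, and the branch taken when the process name is "sudo" is not exercised in this log, so only the reactor path is shown.

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                   # process must still exist
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0, reactor_1
            [ "$name" = sudo ] && return 1           # real helper treats sudo wrappers differently
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }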
00:33:55.566 03:14:46 -- spdk/autotest.sh@294 -- # [[ n == y ]] 00:33:55.566 03:14:46 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:33:55.566 03:14:46 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:33:55.566 03:14:46 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:33:55.566 03:14:46 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:33:55.566 03:14:46 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:33:55.566 03:14:46 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:33:55.566 03:14:46 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:33:55.566 03:14:46 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:33:55.566 03:14:46 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:33:55.566 03:14:46 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:33:55.566 03:14:46 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:33:55.566 03:14:46 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:33:55.566 03:14:46 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:33:55.566 03:14:46 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:33:55.566 03:14:46 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:33:55.566 03:14:46 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:33:55.566 03:14:46 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:33:55.567 03:14:46 -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:55.567 03:14:46 -- common/autotest_common.sh@10 -- # set +x 00:33:55.567 03:14:46 -- spdk/autotest.sh@381 -- # autotest_cleanup 00:33:55.567 03:14:46 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:33:55.567 03:14:46 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:33:55.567 03:14:46 -- common/autotest_common.sh@10 -- # set +x 00:33:57.464 INFO: APP EXITING 00:33:57.464 INFO: killing all VMs 00:33:57.464 INFO: killing vhost app 00:33:57.464 INFO: EXIT DONE 00:33:58.397 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:33:58.397 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:33:58.397 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:33:58.397 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:33:58.397 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:33:58.397 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:33:58.397 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:33:58.397 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:33:58.397 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:33:58.397 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:33:58.397 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:33:58.397 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:33:58.397 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:33:58.397 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:33:58.397 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:33:58.397 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:33:58.397 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:33:59.773 Cleaning 00:33:59.773 Removing: /var/run/dpdk/spdk0/config 00:33:59.773 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:59.773 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:59.773 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:59.773 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:59.773 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:33:59.773 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:33:59.773 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 
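The "Already using the ... driver" lines above report which kernel driver each PCI function is bound to when the workspace is reset: the NVMe SSD at 0000:88:00.0 stays on the nvme driver and the I/OAT DMA channels stay on ioatdma. The command the setup script actually runs is not shown in this log, so the following is only an illustrative way to reproduce that information by resolving the sysfs driver symlink.

    # Report the bound kernel driver for each PCI function of interest
    # (illustrative; the setup script's own logic is not shown in this log).
    for dev in /sys/bus/pci/devices/0000:88:00.0 \
               /sys/bus/pci/devices/0000:00:04.* \
               /sys/bus/pci/devices/0000:80:04.*; do
        bdf=$(basename "$dev")
        if [ -e "$dev/driver" ]; then
            drv=$(basename "$(readlink -f "$dev/driver")")
            echo "$bdf: already using the $drv driver"
        else
            echo "$bdf: not bound to any driver"
        fi
    done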
00:33:59.773 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:33:59.773 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:59.773 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:59.773 Removing: /var/run/dpdk/spdk1/config 00:33:59.773 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:33:59.773 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:33:59.773 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:33:59.773 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:33:59.773 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:33:59.773 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:33:59.773 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:33:59.773 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:33:59.773 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:33:59.773 Removing: /var/run/dpdk/spdk1/hugepage_info 00:33:59.773 Removing: /var/run/dpdk/spdk1/mp_socket 00:33:59.773 Removing: /var/run/dpdk/spdk2/config 00:33:59.773 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:33:59.773 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:33:59.773 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:33:59.773 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:33:59.773 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:33:59.773 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:33:59.773 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:33:59.773 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:33:59.773 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:33:59.773 Removing: /var/run/dpdk/spdk2/hugepage_info 00:33:59.773 Removing: /var/run/dpdk/spdk3/config 00:33:59.773 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:33:59.773 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:33:59.773 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:33:59.773 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:33:59.773 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:33:59.773 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:33:59.773 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:33:59.773 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:33:59.773 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:33:59.773 Removing: /var/run/dpdk/spdk3/hugepage_info 00:33:59.773 Removing: /var/run/dpdk/spdk4/config 00:33:59.773 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:33:59.773 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:33:59.773 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:33:59.773 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:33:59.773 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:33:59.774 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:33:59.774 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:33:59.774 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:33:59.774 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:33:59.774 Removing: /var/run/dpdk/spdk4/hugepage_info 00:33:59.774 Removing: /dev/shm/bdev_svc_trace.1 00:33:59.774 Removing: /dev/shm/nvmf_trace.0 00:33:59.774 Removing: /dev/shm/spdk_tgt_trace.pid222158 00:33:59.774 Removing: /var/run/dpdk/spdk0 00:33:59.774 Removing: /var/run/dpdk/spdk1 00:33:59.774 Removing: /var/run/dpdk/spdk2 00:33:59.774 Removing: /var/run/dpdk/spdk3 00:33:59.774 Removing: /var/run/dpdk/spdk4 00:33:59.774 Removing: /var/run/dpdk/spdk_pid220614 00:33:59.774 Removing: /var/run/dpdk/spdk_pid221349 
00:33:59.774 Removing: /var/run/dpdk/spdk_pid222158 00:33:59.774 Removing: /var/run/dpdk/spdk_pid222598 00:33:59.774 Removing: /var/run/dpdk/spdk_pid223292 00:33:59.774 Removing: /var/run/dpdk/spdk_pid223432 00:33:59.774 Removing: /var/run/dpdk/spdk_pid224146 00:33:59.774 Removing: /var/run/dpdk/spdk_pid224162 00:33:59.774 Removing: /var/run/dpdk/spdk_pid224400 00:33:59.774 Removing: /var/run/dpdk/spdk_pid225709 00:33:59.774 Removing: /var/run/dpdk/spdk_pid226642 00:33:59.774 Removing: /var/run/dpdk/spdk_pid226942 00:33:59.774 Removing: /var/run/dpdk/spdk_pid227127 00:33:59.774 Removing: /var/run/dpdk/spdk_pid227334 00:33:59.774 Removing: /var/run/dpdk/spdk_pid227524 00:33:59.774 Removing: /var/run/dpdk/spdk_pid227687 00:33:59.774 Removing: /var/run/dpdk/spdk_pid227843 00:33:59.774 Removing: /var/run/dpdk/spdk_pid228027 00:33:59.774 Removing: /var/run/dpdk/spdk_pid228592 00:33:59.774 Removing: /var/run/dpdk/spdk_pid230923 00:33:59.774 Removing: /var/run/dpdk/spdk_pid231109 00:33:59.774 Removing: /var/run/dpdk/spdk_pid231279 00:33:59.774 Removing: /var/run/dpdk/spdk_pid231285 00:33:59.774 Removing: /var/run/dpdk/spdk_pid231712 00:33:59.774 Removing: /var/run/dpdk/spdk_pid231720 00:33:59.774 Removing: /var/run/dpdk/spdk_pid232031 00:33:59.774 Removing: /var/run/dpdk/spdk_pid232154 00:33:59.774 Removing: /var/run/dpdk/spdk_pid232327 00:33:59.774 Removing: /var/run/dpdk/spdk_pid232384 00:33:59.774 Removing: /var/run/dpdk/spdk_pid232622 00:33:59.774 Removing: /var/run/dpdk/spdk_pid232632 00:33:59.774 Removing: /var/run/dpdk/spdk_pid233005 00:33:59.774 Removing: /var/run/dpdk/spdk_pid233159 00:33:59.774 Removing: /var/run/dpdk/spdk_pid233522 00:33:59.774 Removing: /var/run/dpdk/spdk_pid233679 00:33:59.774 Removing: /var/run/dpdk/spdk_pid233780 00:33:59.774 Removing: /var/run/dpdk/spdk_pid233844 00:33:59.774 Removing: /var/run/dpdk/spdk_pid234122 00:33:59.774 Removing: /var/run/dpdk/spdk_pid234274 00:33:59.774 Removing: /var/run/dpdk/spdk_pid234502 00:33:59.774 Removing: /var/run/dpdk/spdk_pid235079 00:33:59.774 Removing: /var/run/dpdk/spdk_pid235368 00:33:59.774 Removing: /var/run/dpdk/spdk_pid235528 00:33:59.774 Removing: /var/run/dpdk/spdk_pid235679 00:33:59.774 Removing: /var/run/dpdk/spdk_pid235957 00:33:59.774 Removing: /var/run/dpdk/spdk_pid236117 00:33:59.774 Removing: /var/run/dpdk/spdk_pid236276 00:33:59.774 Removing: /var/run/dpdk/spdk_pid236510 00:33:59.774 Removing: /var/run/dpdk/spdk_pid236708 00:33:59.774 Removing: /var/run/dpdk/spdk_pid236864 00:33:59.774 Removing: /var/run/dpdk/spdk_pid237018 00:33:59.774 Removing: /var/run/dpdk/spdk_pid237290 00:33:59.774 Removing: /var/run/dpdk/spdk_pid237452 00:33:59.774 Removing: /var/run/dpdk/spdk_pid237613 00:33:59.774 Removing: /var/run/dpdk/spdk_pid237796 00:33:59.774 Removing: /var/run/dpdk/spdk_pid238045 00:33:59.774 Removing: /var/run/dpdk/spdk_pid238205 00:33:59.774 Removing: /var/run/dpdk/spdk_pid238388 00:33:59.774 Removing: /var/run/dpdk/spdk_pid238592 00:33:59.774 Removing: /var/run/dpdk/spdk_pid240657 00:33:59.774 Removing: /var/run/dpdk/spdk_pid293142 00:33:59.774 Removing: /var/run/dpdk/spdk_pid296151 00:33:59.774 Removing: /var/run/dpdk/spdk_pid303111 00:33:59.774 Removing: /var/run/dpdk/spdk_pid306406 00:33:59.774 Removing: /var/run/dpdk/spdk_pid308886 00:33:59.774 Removing: /var/run/dpdk/spdk_pid309409 00:33:59.774 Removing: /var/run/dpdk/spdk_pid316526 00:33:59.774 Removing: /var/run/dpdk/spdk_pid316530 00:33:59.774 Removing: /var/run/dpdk/spdk_pid317188 00:33:59.774 Removing: /var/run/dpdk/spdk_pid317728 00:33:59.774 
Removing: /var/run/dpdk/spdk_pid318381 00:33:59.774 Removing: /var/run/dpdk/spdk_pid318776 00:33:59.774 Removing: /var/run/dpdk/spdk_pid318789 00:33:59.774 Removing: /var/run/dpdk/spdk_pid319042 00:33:59.774 Removing: /var/run/dpdk/spdk_pid319128 00:33:59.774 Removing: /var/run/dpdk/spdk_pid319181 00:33:59.774 Removing: /var/run/dpdk/spdk_pid319726 00:33:59.774 Removing: /var/run/dpdk/spdk_pid320376 00:33:59.774 Removing: /var/run/dpdk/spdk_pid321034 00:33:59.774 Removing: /var/run/dpdk/spdk_pid321435 00:33:59.774 Removing: /var/run/dpdk/spdk_pid321438 00:33:59.774 Removing: /var/run/dpdk/spdk_pid321687 00:33:59.774 Removing: /var/run/dpdk/spdk_pid322455 00:33:59.774 Removing: /var/run/dpdk/spdk_pid323259 00:34:00.033 Removing: /var/run/dpdk/spdk_pid329143 00:34:00.033 Removing: /var/run/dpdk/spdk_pid329414 00:34:00.033 Removing: /var/run/dpdk/spdk_pid331917 00:34:00.033 Removing: /var/run/dpdk/spdk_pid335606 00:34:00.033 Removing: /var/run/dpdk/spdk_pid337652 00:34:00.033 Removing: /var/run/dpdk/spdk_pid344016 00:34:00.033 Removing: /var/run/dpdk/spdk_pid349096 00:34:00.033 Removing: /var/run/dpdk/spdk_pid350400 00:34:00.033 Removing: /var/run/dpdk/spdk_pid351064 00:34:00.033 Removing: /var/run/dpdk/spdk_pid361730 00:34:00.033 Removing: /var/run/dpdk/spdk_pid363830 00:34:00.033 Removing: /var/run/dpdk/spdk_pid366731 00:34:00.033 Removing: /var/run/dpdk/spdk_pid367796 00:34:00.033 Removing: /var/run/dpdk/spdk_pid369116 00:34:00.033 Removing: /var/run/dpdk/spdk_pid369243 00:34:00.033 Removing: /var/run/dpdk/spdk_pid369354 00:34:00.033 Removing: /var/run/dpdk/spdk_pid369405 00:34:00.033 Removing: /var/run/dpdk/spdk_pid369840 00:34:00.033 Removing: /var/run/dpdk/spdk_pid371152 00:34:00.033 Removing: /var/run/dpdk/spdk_pid371759 00:34:00.033 Removing: /var/run/dpdk/spdk_pid372189 00:34:00.033 Removing: /var/run/dpdk/spdk_pid373802 00:34:00.033 Removing: /var/run/dpdk/spdk_pid374172 00:34:00.033 Removing: /var/run/dpdk/spdk_pid374674 00:34:00.033 Removing: /var/run/dpdk/spdk_pid377180 00:34:00.033 Removing: /var/run/dpdk/spdk_pid380481 00:34:00.033 Removing: /var/run/dpdk/spdk_pid383976 00:34:00.033 Removing: /var/run/dpdk/spdk_pid406912 00:34:00.033 Removing: /var/run/dpdk/spdk_pid409549 00:34:00.033 Removing: /var/run/dpdk/spdk_pid413319 00:34:00.033 Removing: /var/run/dpdk/spdk_pid414263 00:34:00.033 Removing: /var/run/dpdk/spdk_pid415360 00:34:00.033 Removing: /var/run/dpdk/spdk_pid417909 00:34:00.033 Removing: /var/run/dpdk/spdk_pid420372 00:34:00.033 Removing: /var/run/dpdk/spdk_pid424967 00:34:00.033 Removing: /var/run/dpdk/spdk_pid425084 00:34:00.033 Removing: /var/run/dpdk/spdk_pid427852 00:34:00.033 Removing: /var/run/dpdk/spdk_pid427987 00:34:00.033 Removing: /var/run/dpdk/spdk_pid428122 00:34:00.033 Removing: /var/run/dpdk/spdk_pid428395 00:34:00.033 Removing: /var/run/dpdk/spdk_pid428400 00:34:00.033 Removing: /var/run/dpdk/spdk_pid429473 00:34:00.033 Removing: /var/run/dpdk/spdk_pid430765 00:34:00.033 Removing: /var/run/dpdk/spdk_pid431944 00:34:00.033 Removing: /var/run/dpdk/spdk_pid433127 00:34:00.033 Removing: /var/run/dpdk/spdk_pid434306 00:34:00.034 Removing: /var/run/dpdk/spdk_pid435485 00:34:00.034 Removing: /var/run/dpdk/spdk_pid439152 00:34:00.034 Removing: /var/run/dpdk/spdk_pid439482 00:34:00.034 Removing: /var/run/dpdk/spdk_pid440617 00:34:00.034 Removing: /var/run/dpdk/spdk_pid441096 00:34:00.034 Removing: /var/run/dpdk/spdk_pid444562 00:34:00.034 Removing: /var/run/dpdk/spdk_pid446510 00:34:00.034 Removing: /var/run/dpdk/spdk_pid449924 00:34:00.034 Removing: 
/var/run/dpdk/spdk_pid453853 00:34:00.034 Removing: /var/run/dpdk/spdk_pid460075 00:34:00.034 Removing: /var/run/dpdk/spdk_pid464404 00:34:00.034 Removing: /var/run/dpdk/spdk_pid464406 00:34:00.034 Removing: /var/run/dpdk/spdk_pid476200 00:34:00.034 Removing: /var/run/dpdk/spdk_pid476606 00:34:00.034 Removing: /var/run/dpdk/spdk_pid477012 00:34:00.034 Removing: /var/run/dpdk/spdk_pid477540 00:34:00.034 Removing: /var/run/dpdk/spdk_pid478049 00:34:00.034 Removing: /var/run/dpdk/spdk_pid478524 00:34:00.034 Removing: /var/run/dpdk/spdk_pid478939 00:34:00.034 Removing: /var/run/dpdk/spdk_pid479343 00:34:00.034 Removing: /var/run/dpdk/spdk_pid481837 00:34:00.034 Removing: /var/run/dpdk/spdk_pid481975 00:34:00.034 Removing: /var/run/dpdk/spdk_pid485848 00:34:00.034 Removing: /var/run/dpdk/spdk_pid485925 00:34:00.034 Removing: /var/run/dpdk/spdk_pid488157 00:34:00.034 Removing: /var/run/dpdk/spdk_pid493063 00:34:00.034 Removing: /var/run/dpdk/spdk_pid493072 00:34:00.034 Removing: /var/run/dpdk/spdk_pid495965 00:34:00.034 Removing: /var/run/dpdk/spdk_pid497359 00:34:00.034 Removing: /var/run/dpdk/spdk_pid498757 00:34:00.034 Removing: /var/run/dpdk/spdk_pid499502 00:34:00.034 Removing: /var/run/dpdk/spdk_pid500909 00:34:00.034 Removing: /var/run/dpdk/spdk_pid501696 00:34:00.034 Removing: /var/run/dpdk/spdk_pid506963 00:34:00.034 Removing: /var/run/dpdk/spdk_pid507325 00:34:00.034 Removing: /var/run/dpdk/spdk_pid507717 00:34:00.034 Removing: /var/run/dpdk/spdk_pid509214 00:34:00.034 Removing: /var/run/dpdk/spdk_pid509545 00:34:00.034 Removing: /var/run/dpdk/spdk_pid509943 00:34:00.034 Removing: /var/run/dpdk/spdk_pid512268 00:34:00.034 Removing: /var/run/dpdk/spdk_pid512383 00:34:00.034 Removing: /var/run/dpdk/spdk_pid513722 00:34:00.034 Clean 00:34:00.034 03:14:50 -- common/autotest_common.sh@1447 -- # return 0 00:34:00.034 03:14:50 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:34:00.034 03:14:50 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:00.034 03:14:50 -- common/autotest_common.sh@10 -- # set +x 00:34:00.292 03:14:50 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:34:00.292 03:14:50 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:00.292 03:14:50 -- common/autotest_common.sh@10 -- # set +x 00:34:00.292 03:14:50 -- spdk/autotest.sh@385 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:34:00.292 03:14:50 -- spdk/autotest.sh@387 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:34:00.292 03:14:50 -- spdk/autotest.sh@387 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:34:00.292 03:14:50 -- spdk/autotest.sh@389 -- # hash lcov 00:34:00.292 03:14:50 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:34:00.292 03:14:50 -- spdk/autotest.sh@391 -- # hostname 00:34:00.292 03:14:50 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:34:00.292 geninfo: WARNING: invalid characters removed from testname! 
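Note on the coverage steps: the lcov capture just above and the merge/prune entries that follow take a pre-test baseline tracefile and the post-test capture, combine them with -a, and then strip uninteresting paths (dpdk, /usr, the example/helper apps) with -r. A minimal sketch of that flow, assuming a simplified ./spdk source tree and ./out output directory instead of the jenkins workspace layout, and omitting the extra --rc switches and -t test name used in the log, could look like:

    # capture coverage from the instrumented build tree into a tracefile
    lcov --no-external -q -c -d ./spdk -o ./out/cov_test.info
    # merge the pre-test baseline with the post-test capture
    lcov -q -a ./out/cov_base.info -a ./out/cov_test.info -o ./out/cov_total.info
    # prune third-party and helper-app code from the totals, in place
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov -q -r ./out/cov_total.info "$pat" -o ./out/cov_total.info
    done
    # optional: render an HTML report from the pruned tracefile
    genhtml -o ./out/coverage ./out/cov_total.info

The genhtml step is an assumption for illustration; the log itself only shows tracefile generation and pruning.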
00:34:38.993 03:15:23 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:38.993 03:15:27 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:40.368 03:15:30 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:43.690 03:15:33 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:46.230 03:15:36 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:48.770 03:15:39 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:52.066 03:15:42 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:52.066 03:15:42 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:52.066 03:15:42 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:34:52.066 03:15:42 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:52.066 03:15:42 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:52.066 03:15:42 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:52.066 03:15:42 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:52.066 03:15:42 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:52.066 03:15:42 -- paths/export.sh@5 -- $ export PATH 00:34:52.067 03:15:42 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:52.067 03:15:42 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:34:52.067 03:15:42 -- common/autobuild_common.sh@437 -- $ date +%s 00:34:52.067 03:15:42 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715562942.XXXXXX 00:34:52.067 03:15:42 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715562942.xkhuTQ 00:34:52.067 03:15:42 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:34:52.067 03:15:42 -- common/autobuild_common.sh@443 -- $ '[' -n main ']' 00:34:52.067 03:15:42 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:34:52.067 03:15:42 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:34:52.067 03:15:42 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:34:52.067 03:15:42 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:34:52.067 03:15:42 -- common/autobuild_common.sh@453 -- $ get_config_params 00:34:52.067 03:15:42 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:34:52.067 03:15:42 -- common/autotest_common.sh@10 -- $ set +x 00:34:52.067 03:15:42 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:34:52.067 03:15:42 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:34:52.067 03:15:42 -- pm/common@17 -- $ local monitor 00:34:52.067 03:15:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:52.067 03:15:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:52.067 03:15:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:52.067 
03:15:42 -- pm/common@21 -- $ date +%s 00:34:52.067 03:15:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:52.067 03:15:42 -- pm/common@21 -- $ date +%s 00:34:52.067 03:15:42 -- pm/common@25 -- $ sleep 1 00:34:52.067 03:15:42 -- pm/common@21 -- $ date +%s 00:34:52.067 03:15:42 -- pm/common@21 -- $ date +%s 00:34:52.067 03:15:42 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715562942 00:34:52.067 03:15:42 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715562942 00:34:52.067 03:15:42 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715562942 00:34:52.067 03:15:42 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715562942 00:34:52.067 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715562942_collect-vmstat.pm.log 00:34:52.067 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715562942_collect-cpu-load.pm.log 00:34:52.067 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715562942_collect-cpu-temp.pm.log 00:34:52.067 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715562942_collect-bmc-pm.bmc.pm.log 00:34:52.636 03:15:43 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:34:52.636 03:15:43 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:34:52.636 03:15:43 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:52.636 03:15:43 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:34:52.636 03:15:43 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:34:52.636 03:15:43 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:34:52.636 03:15:43 -- spdk/autopackage.sh@19 -- $ timing_finish 00:34:52.636 03:15:43 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:52.636 03:15:43 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:34:52.636 03:15:43 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:34:52.896 03:15:43 -- spdk/autopackage.sh@20 -- $ exit 0 00:34:52.896 03:15:43 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:34:52.896 03:15:43 -- pm/common@29 -- $ signal_monitor_resources TERM 00:34:52.896 03:15:43 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:34:52.896 03:15:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:52.896 03:15:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:34:52.896 03:15:43 -- pm/common@44 -- $ pid=524663 00:34:52.896 03:15:43 -- pm/common@50 -- $ kill -TERM 524663 00:34:52.896 03:15:43 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:34:52.896 03:15:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:34:52.896 03:15:43 -- pm/common@44 -- $ pid=524665 00:34:52.896 03:15:43 -- pm/common@50 -- $ kill -TERM 524665 00:34:52.896 03:15:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:52.896 03:15:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:34:52.896 03:15:43 -- pm/common@44 -- $ pid=524667 00:34:52.896 03:15:43 -- pm/common@50 -- $ kill -TERM 524667 00:34:52.896 03:15:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:52.896 03:15:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:34:52.896 03:15:43 -- pm/common@44 -- $ pid=524703 00:34:52.896 03:15:43 -- pm/common@50 -- $ sudo -E kill -TERM 524703 00:34:52.896 + [[ -n 115757 ]] 00:34:52.896 + sudo kill 115757 00:34:52.907 [Pipeline] } 00:34:52.925 [Pipeline] // stage 00:34:52.930 [Pipeline] } 00:34:52.947 [Pipeline] // timeout 00:34:52.952 [Pipeline] } 00:34:52.968 [Pipeline] // catchError 00:34:52.973 [Pipeline] } 00:34:52.990 [Pipeline] // wrap 00:34:52.995 [Pipeline] } 00:34:53.010 [Pipeline] // catchError 00:34:53.018 [Pipeline] stage 00:34:53.020 [Pipeline] { (Epilogue) 00:34:53.035 [Pipeline] catchError 00:34:53.036 [Pipeline] { 00:34:53.051 [Pipeline] echo 00:34:53.052 Cleanup processes 00:34:53.056 [Pipeline] sh 00:34:53.339 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:53.339 524799 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:34:53.339 524928 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:53.353 [Pipeline] sh 00:34:53.635 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:53.635 ++ grep -v 'sudo pgrep' 00:34:53.635 ++ awk '{print $1}' 00:34:53.635 + sudo kill -9 524799 00:34:53.647 [Pipeline] sh 00:34:53.930 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:35:02.064 [Pipeline] sh 00:35:02.349 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:35:02.350 Artifacts sizes are good 00:35:02.365 [Pipeline] archiveArtifacts 00:35:02.372 Archiving artifacts 00:35:02.586 [Pipeline] sh 00:35:02.872 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:35:02.888 [Pipeline] cleanWs 00:35:02.899 [WS-CLEANUP] Deleting project workspace... 00:35:02.899 [WS-CLEANUP] Deferred wipeout is used... 00:35:02.906 [WS-CLEANUP] done 00:35:02.909 [Pipeline] } 00:35:02.931 [Pipeline] // catchError 00:35:02.946 [Pipeline] sh 00:35:03.227 + logger -p user.info -t JENKINS-CI 00:35:03.236 [Pipeline] } 00:35:03.250 [Pipeline] // stage 00:35:03.254 [Pipeline] } 00:35:03.272 [Pipeline] // node 00:35:03.279 [Pipeline] End of Pipeline 00:35:03.322 Finished: SUCCESS